00:00:00.000 Started by upstream project "autotest-per-patch" build number 126259 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.120 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.120 The recommended git tool is: git 00:00:00.120 using credential 00000000-0000-0000-0000-000000000002 00:00:00.122 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.159 Fetching changes from the remote Git repository 00:00:00.162 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.201 Using shallow fetch with depth 1 00:00:00.201 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.201 > git --version # timeout=10 00:00:00.228 > git --version # 'git version 2.39.2' 00:00:00.229 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.248 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.248 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.125 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.155 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.170 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:04.170 > git config core.sparsecheckout # timeout=10 00:00:04.182 > git read-tree -mu HEAD # timeout=10 00:00:04.196 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.216 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.216 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:04.293 [Pipeline] Start of Pipeline 00:00:04.303 [Pipeline] library 00:00:04.304 Loading library shm_lib@master 00:00:04.304 Library shm_lib@master is cached. Copying from home. 00:00:04.321 [Pipeline] node 00:00:04.328 Running on WFP16 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.330 [Pipeline] { 00:00:04.337 [Pipeline] catchError 00:00:04.338 [Pipeline] { 00:00:04.346 [Pipeline] wrap 00:00:04.352 [Pipeline] { 00:00:04.357 [Pipeline] stage 00:00:04.358 [Pipeline] { (Prologue) 00:00:04.518 [Pipeline] sh 00:00:04.800 + logger -p user.info -t JENKINS-CI 00:00:04.822 [Pipeline] echo 00:00:04.851 Node: WFP16 00:00:04.863 [Pipeline] sh 00:00:05.162 [Pipeline] setCustomBuildProperty 00:00:05.175 [Pipeline] echo 00:00:05.176 Cleanup processes 00:00:05.180 [Pipeline] sh 00:00:05.456 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.456 2720389 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.467 [Pipeline] sh 00:00:05.745 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.745 ++ grep -v 'sudo pgrep' 00:00:05.745 ++ awk '{print $1}' 00:00:05.745 + sudo kill -9 00:00:05.745 + true 00:00:05.757 [Pipeline] cleanWs 00:00:05.767 [WS-CLEANUP] Deleting project workspace... 00:00:05.767 [WS-CLEANUP] Deferred wipeout is used... 
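[Editor's note] For orientation, the pgrep/kill cleanup traced in the Prologue above condenses to the standalone sketch below. This is not the job's actual script; the workspace path is taken from this log, and, like the job, the sketch tolerates the case where nothing matches.

    # Sketch of the stale-process cleanup shown above, not the job's actual script.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # Find leftover processes still running out of the workspace's spdk tree,
    # drop the pgrep invocation itself, keep only the PIDs, and kill them.
    # An empty PID list makes kill fail, which the trailing || true absorbs
    # (exactly what happens in the trace above).
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true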
00:00:05.773 [WS-CLEANUP] done 00:00:05.778 [Pipeline] setCustomBuildProperty 00:00:05.788 [Pipeline] sh 00:00:06.064 + sudo git config --global --replace-all safe.directory '*' 00:00:06.137 [Pipeline] httpRequest 00:00:06.152 [Pipeline] echo 00:00:06.153 Sorcerer 10.211.164.101 is alive 00:00:06.160 [Pipeline] httpRequest 00:00:06.163 HttpMethod: GET 00:00:06.164 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.164 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:06.170 Response Code: HTTP/1.1 200 OK 00:00:06.171 Success: Status code 200 is in the accepted range: 200,404 00:00:06.171 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.976 [Pipeline] sh 00:00:09.257 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.275 [Pipeline] httpRequest 00:00:09.305 [Pipeline] echo 00:00:09.307 Sorcerer 10.211.164.101 is alive 00:00:09.315 [Pipeline] httpRequest 00:00:09.319 HttpMethod: GET 00:00:09.320 URL: http://10.211.164.101/packages/spdk_e9e51ebfe370461d38e67d0ad17ccb8703729896.tar.gz 00:00:09.320 Sending request to url: http://10.211.164.101/packages/spdk_e9e51ebfe370461d38e67d0ad17ccb8703729896.tar.gz 00:00:09.336 Response Code: HTTP/1.1 200 OK 00:00:09.336 Success: Status code 200 is in the accepted range: 200,404 00:00:09.337 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e9e51ebfe370461d38e67d0ad17ccb8703729896.tar.gz 00:00:56.194 [Pipeline] sh 00:00:56.474 + tar --no-same-owner -xf spdk_e9e51ebfe370461d38e67d0ad17ccb8703729896.tar.gz 00:01:00.679 [Pipeline] sh 00:01:00.962 + git -C spdk log --oneline -n5 00:01:00.962 e9e51ebfe nvme/pcie: allocate cq from device-local numa node's memory 00:01:00.962 fcbf7f00f bdev/nvme: show `numa_socket_id` for bdev_nvme_get_controllers 00:01:00.962 47ca8c1aa nvme: populate socket_id for rdma controllers 00:01:00.962 c1860effd nvme: populate socket_id for tcp controllers 00:01:00.962 91f51bb85 nvme: populate socket_id for pcie controllers 00:01:00.977 [Pipeline] } 00:01:00.998 [Pipeline] // stage 00:01:01.007 [Pipeline] stage 00:01:01.010 [Pipeline] { (Prepare) 00:01:01.027 [Pipeline] writeFile 00:01:01.044 [Pipeline] sh 00:01:01.326 + logger -p user.info -t JENKINS-CI 00:01:01.338 [Pipeline] sh 00:01:01.619 + logger -p user.info -t JENKINS-CI 00:01:01.632 [Pipeline] sh 00:01:01.916 + cat autorun-spdk.conf 00:01:01.916 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.916 SPDK_TEST_NVMF=1 00:01:01.916 SPDK_TEST_NVME_CLI=1 00:01:01.916 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.916 SPDK_TEST_NVMF_NICS=e810 00:01:01.916 SPDK_TEST_VFIOUSER=1 00:01:01.916 SPDK_RUN_UBSAN=1 00:01:01.916 NET_TYPE=phy 00:01:01.923 RUN_NIGHTLY=0 00:01:01.928 [Pipeline] readFile 00:01:01.956 [Pipeline] withEnv 00:01:01.959 [Pipeline] { 00:01:01.975 [Pipeline] sh 00:01:02.262 + set -ex 00:01:02.262 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:02.262 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:02.262 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.262 ++ SPDK_TEST_NVMF=1 00:01:02.262 ++ SPDK_TEST_NVME_CLI=1 00:01:02.262 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.262 ++ SPDK_TEST_NVMF_NICS=e810 00:01:02.262 ++ SPDK_TEST_VFIOUSER=1 00:01:02.262 ++ SPDK_RUN_UBSAN=1 00:01:02.262 ++ NET_TYPE=phy 00:01:02.262 ++ RUN_NIGHTLY=0 00:01:02.262 + case $SPDK_TEST_NVMF_NICS in 00:01:02.262 + DRIVERS=ice 
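[Editor's note] The autorun-spdk.conf dumped above is a plain shell key=value file that the job sources back in before the Tests stage; the trace then maps SPDK_TEST_NVMF_NICS to a kernel driver and swaps modules. A minimal sketch of that flow follows; the e810-to-ice case arm is inferred from the visible trace (DRIVERS=ice), not quoted from the real script.

    # Sketch only; reconstructs the NIC-to-driver selection and module swap visible in the trace.
    source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    case $SPDK_TEST_NVMF_NICS in
        e810) DRIVERS=ice ;;    # arm inferred: Intel E810 NICs use the ice driver
    esac
    # Unload RDMA-capable drivers that could hold the ports, tolerating "not loaded"
    # errors, then load the selected driver (mirrors the rmmod/modprobe lines below).
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do sudo modprobe "$D"; done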
00:01:02.262 + [[ tcp == \r\d\m\a ]] 00:01:02.262 + [[ -n ice ]] 00:01:02.262 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:02.262 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:02.262 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:02.262 rmmod: ERROR: Module irdma is not currently loaded 00:01:02.262 rmmod: ERROR: Module i40iw is not currently loaded 00:01:02.262 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:02.262 + true 00:01:02.262 + for D in $DRIVERS 00:01:02.262 + sudo modprobe ice 00:01:02.262 + exit 0 00:01:02.271 [Pipeline] } 00:01:02.286 [Pipeline] // withEnv 00:01:02.290 [Pipeline] } 00:01:02.306 [Pipeline] // stage 00:01:02.315 [Pipeline] catchError 00:01:02.317 [Pipeline] { 00:01:02.331 [Pipeline] timeout 00:01:02.332 Timeout set to expire in 50 min 00:01:02.333 [Pipeline] { 00:01:02.347 [Pipeline] stage 00:01:02.350 [Pipeline] { (Tests) 00:01:02.367 [Pipeline] sh 00:01:02.655 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:02.655 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:02.655 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:02.655 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:02.655 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:02.655 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:02.655 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:02.655 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:02.655 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:02.655 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:02.655 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:02.655 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:02.655 + source /etc/os-release 00:01:02.655 ++ NAME='Fedora Linux' 00:01:02.655 ++ VERSION='38 (Cloud Edition)' 00:01:02.655 ++ ID=fedora 00:01:02.655 ++ VERSION_ID=38 00:01:02.655 ++ VERSION_CODENAME= 00:01:02.655 ++ PLATFORM_ID=platform:f38 00:01:02.655 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:02.655 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:02.655 ++ LOGO=fedora-logo-icon 00:01:02.655 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:02.655 ++ HOME_URL=https://fedoraproject.org/ 00:01:02.655 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:02.655 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:02.655 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:02.655 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:02.655 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:02.655 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:02.655 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:02.655 ++ SUPPORT_END=2024-05-14 00:01:02.655 ++ VARIANT='Cloud Edition' 00:01:02.655 ++ VARIANT_ID=cloud 00:01:02.655 + uname -a 00:01:02.655 Linux spdk-wfp-16 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:02.655 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:05.189 Hugepages 00:01:05.189 node hugesize free / total 00:01:05.189 node0 1048576kB 0 / 0 00:01:05.189 node0 2048kB 0 / 0 00:01:05.189 node1 1048576kB 0 / 0 00:01:05.189 node1 2048kB 0 / 0 00:01:05.189 00:01:05.189 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:05.189 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:05.189 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:05.189 I/OAT 0000:00:04.2 8086 2021 0 ioatdma 
- - 00:01:05.189 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:05.189 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:05.189 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:05.189 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:05.189 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:05.189 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:05.189 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:05.189 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:05.189 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:05.189 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:05.189 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:05.189 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:05.189 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:05.189 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:05.189 + rm -f /tmp/spdk-ld-path 00:01:05.189 + source autorun-spdk.conf 00:01:05.189 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.189 ++ SPDK_TEST_NVMF=1 00:01:05.189 ++ SPDK_TEST_NVME_CLI=1 00:01:05.189 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:05.189 ++ SPDK_TEST_NVMF_NICS=e810 00:01:05.189 ++ SPDK_TEST_VFIOUSER=1 00:01:05.189 ++ SPDK_RUN_UBSAN=1 00:01:05.189 ++ NET_TYPE=phy 00:01:05.189 ++ RUN_NIGHTLY=0 00:01:05.189 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:05.189 + [[ -n '' ]] 00:01:05.189 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.189 + for M in /var/spdk/build-*-manifest.txt 00:01:05.189 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:05.189 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:05.189 + for M in /var/spdk/build-*-manifest.txt 00:01:05.189 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:05.189 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:05.189 ++ uname 00:01:05.189 + [[ Linux == \L\i\n\u\x ]] 00:01:05.189 + sudo dmesg -T 00:01:05.189 + sudo dmesg --clear 00:01:05.448 + dmesg_pid=2721823 00:01:05.448 + [[ Fedora Linux == FreeBSD ]] 00:01:05.448 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:05.448 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:05.448 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:05.448 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:05.448 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:05.448 + sudo dmesg -Tw 00:01:05.448 + [[ -x /usr/src/fio-static/fio ]] 00:01:05.448 + export FIO_BIN=/usr/src/fio-static/fio 00:01:05.448 + FIO_BIN=/usr/src/fio-static/fio 00:01:05.448 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:05.448 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:05.448 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:05.448 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:05.448 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:05.448 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:05.448 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:05.448 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:05.448 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:05.448 Test configuration: 00:01:05.448 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.448 SPDK_TEST_NVMF=1 00:01:05.449 SPDK_TEST_NVME_CLI=1 00:01:05.449 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:05.449 SPDK_TEST_NVMF_NICS=e810 00:01:05.449 SPDK_TEST_VFIOUSER=1 00:01:05.449 SPDK_RUN_UBSAN=1 00:01:05.449 NET_TYPE=phy 00:01:05.449 RUN_NIGHTLY=0 00:27:23 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:05.449 00:27:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:05.449 00:27:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:05.449 00:27:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:05.449 00:27:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:05.449 00:27:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:05.449 00:27:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:05.449 00:27:23 -- paths/export.sh@5 -- $ export PATH 00:01:05.449 00:27:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:05.449 00:27:23 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:05.449 00:27:23 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:05.449 00:27:23 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721082443.XXXXXX 00:01:05.449 00:27:23 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721082443.lIbH9v 00:01:05.449 00:27:23 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:05.449 00:27:23 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:05.449 00:27:23 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:05.449 00:27:23 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:05.449 00:27:23 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:05.449 00:27:23 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:05.449 00:27:23 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:05.449 00:27:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:05.449 00:27:23 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:05.449 00:27:23 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:05.449 00:27:23 -- pm/common@17 -- $ local monitor 00:01:05.449 00:27:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:05.449 00:27:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:05.449 00:27:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:05.449 00:27:23 -- pm/common@21 -- $ date +%s 00:01:05.449 00:27:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:05.449 00:27:23 -- pm/common@21 -- $ date +%s 00:01:05.449 00:27:23 -- pm/common@25 -- $ sleep 1 00:01:05.449 00:27:23 -- pm/common@21 -- $ date +%s 00:01:05.449 00:27:23 -- pm/common@21 -- $ date +%s 00:01:05.449 00:27:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721082443 00:01:05.449 00:27:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721082443 00:01:05.449 00:27:23 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721082443 00:01:05.449 00:27:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721082443 00:01:05.449 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721082443_collect-vmstat.pm.log 00:01:05.449 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721082443_collect-cpu-load.pm.log 00:01:05.449 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721082443_collect-cpu-temp.pm.log 00:01:05.449 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721082443_collect-bmc-pm.bmc.pm.log 00:01:06.385 00:27:24 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:06.385 00:27:24 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:06.385 00:27:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:06.385 00:27:24 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:06.385 00:27:24 -- spdk/autobuild.sh@16 -- $ date -u 00:01:06.385 Mon Jul 15 10:27:24 PM UTC 2024 00:01:06.385 00:27:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:06.385 v24.09-pre-235-ge9e51ebfe 00:01:06.385 00:27:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:06.385 00:27:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:06.385 00:27:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:06.385 00:27:24 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:06.385 00:27:24 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:06.385 00:27:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.643 ************************************ 00:01:06.643 START TEST ubsan 00:01:06.643 ************************************ 00:01:06.643 00:27:24 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:06.643 using ubsan 00:01:06.643 00:01:06.643 real 0m0.000s 00:01:06.643 user 0m0.000s 00:01:06.643 sys 0m0.000s 00:01:06.643 00:27:24 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:06.643 00:27:24 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:06.643 ************************************ 00:01:06.643 END TEST ubsan 00:01:06.643 ************************************ 00:01:06.643 00:27:24 -- common/autotest_common.sh@1142 -- $ return 0 00:01:06.643 00:27:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:06.643 00:27:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:06.643 00:27:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:06.643 00:27:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:06.643 00:27:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:06.643 00:27:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:06.643 00:27:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:06.643 00:27:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:06.643 00:27:24 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:06.643 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:06.643 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:07.209 Using 'verbs' RDMA provider 00:01:22.666 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:37.549 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:37.549 Creating mk/config.mk...done. 00:01:37.549 Creating mk/cc.flags.mk...done. 00:01:37.549 Type 'make' to build. 
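[Editor's note] The configure flags above are echoed verbatim by autobuild.sh. Assuming a local SPDK checkout, the same build step can be reproduced by hand roughly as follows; the flags are copied from the configure line above, and nproc stands in for the job's fixed -j112.

    # Sketch, assuming a local spdk checkout; flags copied from the configure line above.
    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"    # the job itself runs make -j112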
00:01:37.549 00:27:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:37.549 00:27:53 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:37.549 00:27:53 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:37.549 00:27:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.549 ************************************ 00:01:37.549 START TEST make 00:01:37.549 ************************************ 00:01:37.549 00:27:53 make -- common/autotest_common.sh@1123 -- $ make -j112 00:01:37.549 make[1]: Nothing to be done for 'all'. 00:01:37.549 The Meson build system 00:01:37.549 Version: 1.3.1 00:01:37.549 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:37.549 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:37.549 Build type: native build 00:01:37.549 Project name: libvfio-user 00:01:37.549 Project version: 0.0.1 00:01:37.549 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:37.549 C linker for the host machine: cc ld.bfd 2.39-16 00:01:37.549 Host machine cpu family: x86_64 00:01:37.549 Host machine cpu: x86_64 00:01:37.549 Run-time dependency threads found: YES 00:01:37.549 Library dl found: YES 00:01:37.549 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:37.549 Run-time dependency json-c found: YES 0.17 00:01:37.549 Run-time dependency cmocka found: YES 1.1.7 00:01:37.549 Program pytest-3 found: NO 00:01:37.549 Program flake8 found: NO 00:01:37.549 Program misspell-fixer found: NO 00:01:37.549 Program restructuredtext-lint found: NO 00:01:37.549 Program valgrind found: YES (/usr/bin/valgrind) 00:01:37.549 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:37.549 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:37.549 Compiler for C supports arguments -Wwrite-strings: YES 00:01:37.549 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:37.549 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:37.549 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:37.549 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:37.549 Build targets in project: 8 00:01:37.549 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:37.549 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:37.549 00:01:37.549 libvfio-user 0.0.1 00:01:37.549 00:01:37.549 User defined options 00:01:37.549 buildtype : debug 00:01:37.549 default_library: shared 00:01:37.549 libdir : /usr/local/lib 00:01:37.549 00:01:37.549 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:38.114 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:38.114 [1/37] Compiling C object samples/null.p/null.c.o 00:01:38.114 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:38.114 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:38.114 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:38.114 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:38.114 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:38.114 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:38.114 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:38.114 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:38.114 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:38.114 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:38.114 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:38.114 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:38.114 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:38.372 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:38.372 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:38.372 [17/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:38.372 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:38.372 [19/37] Compiling C object samples/server.p/server.c.o 00:01:38.372 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:38.372 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:38.372 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:38.372 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:38.372 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:38.372 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:38.372 [26/37] Compiling C object samples/client.p/client.c.o 00:01:38.372 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:38.372 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:01:38.372 [29/37] Linking target samples/client 00:01:38.372 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:38.630 [31/37] Linking target test/unit_tests 00:01:38.630 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:38.630 [33/37] Linking target samples/null 00:01:38.630 [34/37] Linking target samples/server 00:01:38.630 [35/37] Linking target samples/gpio-pci-idio-16 00:01:38.630 [36/37] Linking target samples/lspci 00:01:38.630 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:38.630 INFO: autodetecting backend as ninja 00:01:38.630 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
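[Editor's note] The Meson and ninja output above is the out-of-tree libvfio-user build. The setup invocation itself is not captured in this log, so the reconstruction below infers its options (buildtype, default_library, libdir) from the "User defined options" summary; the install command matches the DESTDIR line that follows in the log.

    # Reconstruction from the summary above; the real invocation is not shown in this log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    meson setup "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user" \
        -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C "$SPDK/build/libvfio-user/build-debug"
    DESTDIR="$SPDK/build/libvfio-user" meson install --quiet -C "$SPDK/build/libvfio-user/build-debug"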
00:01:38.630 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:39.198 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:39.198 ninja: no work to do. 00:01:45.767 The Meson build system 00:01:45.767 Version: 1.3.1 00:01:45.767 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:45.767 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:45.767 Build type: native build 00:01:45.767 Program cat found: YES (/usr/bin/cat) 00:01:45.767 Project name: DPDK 00:01:45.767 Project version: 24.03.0 00:01:45.767 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:45.767 C linker for the host machine: cc ld.bfd 2.39-16 00:01:45.767 Host machine cpu family: x86_64 00:01:45.767 Host machine cpu: x86_64 00:01:45.767 Message: ## Building in Developer Mode ## 00:01:45.767 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:45.767 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:45.768 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:45.768 Program python3 found: YES (/usr/bin/python3) 00:01:45.768 Program cat found: YES (/usr/bin/cat) 00:01:45.768 Compiler for C supports arguments -march=native: YES 00:01:45.768 Checking for size of "void *" : 8 00:01:45.768 Checking for size of "void *" : 8 (cached) 00:01:45.768 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:45.768 Library m found: YES 00:01:45.768 Library numa found: YES 00:01:45.768 Has header "numaif.h" : YES 00:01:45.768 Library fdt found: NO 00:01:45.768 Library execinfo found: NO 00:01:45.768 Has header "execinfo.h" : YES 00:01:45.768 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:45.768 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:45.768 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:45.768 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:45.768 Run-time dependency openssl found: YES 3.0.9 00:01:45.768 Run-time dependency libpcap found: YES 1.10.4 00:01:45.768 Has header "pcap.h" with dependency libpcap: YES 00:01:45.768 Compiler for C supports arguments -Wcast-qual: YES 00:01:45.768 Compiler for C supports arguments -Wdeprecated: YES 00:01:45.768 Compiler for C supports arguments -Wformat: YES 00:01:45.768 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:45.768 Compiler for C supports arguments -Wformat-security: NO 00:01:45.768 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:45.768 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:45.768 Compiler for C supports arguments -Wnested-externs: YES 00:01:45.768 Compiler for C supports arguments -Wold-style-definition: YES 00:01:45.768 Compiler for C supports arguments -Wpointer-arith: YES 00:01:45.768 Compiler for C supports arguments -Wsign-compare: YES 00:01:45.768 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:45.768 Compiler for C supports arguments -Wundef: YES 00:01:45.768 Compiler for C supports arguments -Wwrite-strings: YES 00:01:45.768 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:45.768 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:45.768 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:45.768 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:45.768 Program objdump found: YES (/usr/bin/objdump) 00:01:45.768 Compiler for C supports arguments -mavx512f: YES 00:01:45.768 Checking if "AVX512 checking" compiles: YES 00:01:45.768 Fetching value of define "__SSE4_2__" : 1 00:01:45.768 Fetching value of define "__AES__" : 1 00:01:45.768 Fetching value of define "__AVX__" : 1 00:01:45.768 Fetching value of define "__AVX2__" : 1 00:01:45.768 Fetching value of define "__AVX512BW__" : 1 00:01:45.768 Fetching value of define "__AVX512CD__" : 1 00:01:45.768 Fetching value of define "__AVX512DQ__" : 1 00:01:45.768 Fetching value of define "__AVX512F__" : 1 00:01:45.768 Fetching value of define "__AVX512VL__" : 1 00:01:45.768 Fetching value of define "__PCLMUL__" : 1 00:01:45.768 Fetching value of define "__RDRND__" : 1 00:01:45.768 Fetching value of define "__RDSEED__" : 1 00:01:45.768 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:45.768 Fetching value of define "__znver1__" : (undefined) 00:01:45.768 Fetching value of define "__znver2__" : (undefined) 00:01:45.768 Fetching value of define "__znver3__" : (undefined) 00:01:45.768 Fetching value of define "__znver4__" : (undefined) 00:01:45.768 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:45.768 Message: lib/log: Defining dependency "log" 00:01:45.768 Message: lib/kvargs: Defining dependency "kvargs" 00:01:45.768 Message: lib/telemetry: Defining dependency "telemetry" 00:01:45.768 Checking for function "getentropy" : NO 00:01:45.768 Message: lib/eal: Defining dependency "eal" 00:01:45.768 Message: lib/ring: Defining dependency "ring" 00:01:45.768 Message: lib/rcu: Defining dependency "rcu" 00:01:45.768 Message: lib/mempool: Defining dependency "mempool" 00:01:45.768 Message: lib/mbuf: Defining dependency "mbuf" 00:01:45.768 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:45.768 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:45.768 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:45.768 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:45.768 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:45.768 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:45.768 Compiler for C supports arguments -mpclmul: YES 00:01:45.768 Compiler for C supports arguments -maes: YES 00:01:45.768 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:45.768 Compiler for C supports arguments -mavx512bw: YES 00:01:45.768 Compiler for C supports arguments -mavx512dq: YES 00:01:45.768 Compiler for C supports arguments -mavx512vl: YES 00:01:45.768 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:45.768 Compiler for C supports arguments -mavx2: YES 00:01:45.768 Compiler for C supports arguments -mavx: YES 00:01:45.768 Message: lib/net: Defining dependency "net" 00:01:45.768 Message: lib/meter: Defining dependency "meter" 00:01:45.768 Message: lib/ethdev: Defining dependency "ethdev" 00:01:45.768 Message: lib/pci: Defining dependency "pci" 00:01:45.768 Message: lib/cmdline: Defining dependency "cmdline" 00:01:45.768 Message: lib/hash: Defining dependency "hash" 00:01:45.768 Message: lib/timer: Defining dependency "timer" 00:01:45.768 Message: lib/compressdev: Defining dependency "compressdev" 00:01:45.768 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:45.768 Message: lib/dmadev: Defining dependency "dmadev" 00:01:45.768 
Compiler for C supports arguments -Wno-cast-qual: YES 00:01:45.768 Message: lib/power: Defining dependency "power" 00:01:45.768 Message: lib/reorder: Defining dependency "reorder" 00:01:45.768 Message: lib/security: Defining dependency "security" 00:01:45.768 Has header "linux/userfaultfd.h" : YES 00:01:45.768 Has header "linux/vduse.h" : YES 00:01:45.768 Message: lib/vhost: Defining dependency "vhost" 00:01:45.768 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:45.768 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:45.768 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:45.768 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:45.768 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:45.768 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:45.768 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:45.768 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:45.768 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:45.768 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:45.768 Program doxygen found: YES (/usr/bin/doxygen) 00:01:45.768 Configuring doxy-api-html.conf using configuration 00:01:45.768 Configuring doxy-api-man.conf using configuration 00:01:45.768 Program mandb found: YES (/usr/bin/mandb) 00:01:45.768 Program sphinx-build found: NO 00:01:45.768 Configuring rte_build_config.h using configuration 00:01:45.768 Message: 00:01:45.768 ================= 00:01:45.768 Applications Enabled 00:01:45.768 ================= 00:01:45.768 00:01:45.768 apps: 00:01:45.768 00:01:45.768 00:01:45.768 Message: 00:01:45.768 ================= 00:01:45.768 Libraries Enabled 00:01:45.768 ================= 00:01:45.768 00:01:45.768 libs: 00:01:45.768 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:45.768 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:45.768 cryptodev, dmadev, power, reorder, security, vhost, 00:01:45.768 00:01:45.768 Message: 00:01:45.768 =============== 00:01:45.768 Drivers Enabled 00:01:45.768 =============== 00:01:45.768 00:01:45.768 common: 00:01:45.768 00:01:45.768 bus: 00:01:45.768 pci, vdev, 00:01:45.768 mempool: 00:01:45.768 ring, 00:01:45.768 dma: 00:01:45.768 00:01:45.768 net: 00:01:45.768 00:01:45.768 crypto: 00:01:45.768 00:01:45.768 compress: 00:01:45.768 00:01:45.768 vdpa: 00:01:45.768 00:01:45.768 00:01:45.768 Message: 00:01:45.768 ================= 00:01:45.768 Content Skipped 00:01:45.768 ================= 00:01:45.768 00:01:45.768 apps: 00:01:45.768 dumpcap: explicitly disabled via build config 00:01:45.768 graph: explicitly disabled via build config 00:01:45.768 pdump: explicitly disabled via build config 00:01:45.768 proc-info: explicitly disabled via build config 00:01:45.768 test-acl: explicitly disabled via build config 00:01:45.768 test-bbdev: explicitly disabled via build config 00:01:45.768 test-cmdline: explicitly disabled via build config 00:01:45.768 test-compress-perf: explicitly disabled via build config 00:01:45.768 test-crypto-perf: explicitly disabled via build config 00:01:45.768 test-dma-perf: explicitly disabled via build config 00:01:45.768 test-eventdev: explicitly disabled via build config 00:01:45.768 test-fib: explicitly disabled via build config 00:01:45.768 test-flow-perf: explicitly disabled via build config 00:01:45.769 test-gpudev: explicitly disabled via build config 
00:01:45.769 test-mldev: explicitly disabled via build config 00:01:45.769 test-pipeline: explicitly disabled via build config 00:01:45.769 test-pmd: explicitly disabled via build config 00:01:45.769 test-regex: explicitly disabled via build config 00:01:45.769 test-sad: explicitly disabled via build config 00:01:45.769 test-security-perf: explicitly disabled via build config 00:01:45.769 00:01:45.769 libs: 00:01:45.769 argparse: explicitly disabled via build config 00:01:45.769 metrics: explicitly disabled via build config 00:01:45.769 acl: explicitly disabled via build config 00:01:45.769 bbdev: explicitly disabled via build config 00:01:45.769 bitratestats: explicitly disabled via build config 00:01:45.769 bpf: explicitly disabled via build config 00:01:45.769 cfgfile: explicitly disabled via build config 00:01:45.769 distributor: explicitly disabled via build config 00:01:45.769 efd: explicitly disabled via build config 00:01:45.769 eventdev: explicitly disabled via build config 00:01:45.769 dispatcher: explicitly disabled via build config 00:01:45.769 gpudev: explicitly disabled via build config 00:01:45.769 gro: explicitly disabled via build config 00:01:45.769 gso: explicitly disabled via build config 00:01:45.769 ip_frag: explicitly disabled via build config 00:01:45.769 jobstats: explicitly disabled via build config 00:01:45.769 latencystats: explicitly disabled via build config 00:01:45.769 lpm: explicitly disabled via build config 00:01:45.769 member: explicitly disabled via build config 00:01:45.769 pcapng: explicitly disabled via build config 00:01:45.769 rawdev: explicitly disabled via build config 00:01:45.769 regexdev: explicitly disabled via build config 00:01:45.769 mldev: explicitly disabled via build config 00:01:45.769 rib: explicitly disabled via build config 00:01:45.769 sched: explicitly disabled via build config 00:01:45.769 stack: explicitly disabled via build config 00:01:45.769 ipsec: explicitly disabled via build config 00:01:45.769 pdcp: explicitly disabled via build config 00:01:45.769 fib: explicitly disabled via build config 00:01:45.769 port: explicitly disabled via build config 00:01:45.769 pdump: explicitly disabled via build config 00:01:45.769 table: explicitly disabled via build config 00:01:45.769 pipeline: explicitly disabled via build config 00:01:45.769 graph: explicitly disabled via build config 00:01:45.769 node: explicitly disabled via build config 00:01:45.769 00:01:45.769 drivers: 00:01:45.769 common/cpt: not in enabled drivers build config 00:01:45.769 common/dpaax: not in enabled drivers build config 00:01:45.769 common/iavf: not in enabled drivers build config 00:01:45.769 common/idpf: not in enabled drivers build config 00:01:45.769 common/ionic: not in enabled drivers build config 00:01:45.769 common/mvep: not in enabled drivers build config 00:01:45.769 common/octeontx: not in enabled drivers build config 00:01:45.769 bus/auxiliary: not in enabled drivers build config 00:01:45.769 bus/cdx: not in enabled drivers build config 00:01:45.769 bus/dpaa: not in enabled drivers build config 00:01:45.769 bus/fslmc: not in enabled drivers build config 00:01:45.769 bus/ifpga: not in enabled drivers build config 00:01:45.769 bus/platform: not in enabled drivers build config 00:01:45.769 bus/uacce: not in enabled drivers build config 00:01:45.769 bus/vmbus: not in enabled drivers build config 00:01:45.769 common/cnxk: not in enabled drivers build config 00:01:45.769 common/mlx5: not in enabled drivers build config 00:01:45.769 common/nfp: not in 
enabled drivers build config 00:01:45.769 common/nitrox: not in enabled drivers build config 00:01:45.769 common/qat: not in enabled drivers build config 00:01:45.769 common/sfc_efx: not in enabled drivers build config 00:01:45.769 mempool/bucket: not in enabled drivers build config 00:01:45.769 mempool/cnxk: not in enabled drivers build config 00:01:45.769 mempool/dpaa: not in enabled drivers build config 00:01:45.769 mempool/dpaa2: not in enabled drivers build config 00:01:45.769 mempool/octeontx: not in enabled drivers build config 00:01:45.769 mempool/stack: not in enabled drivers build config 00:01:45.769 dma/cnxk: not in enabled drivers build config 00:01:45.769 dma/dpaa: not in enabled drivers build config 00:01:45.769 dma/dpaa2: not in enabled drivers build config 00:01:45.769 dma/hisilicon: not in enabled drivers build config 00:01:45.769 dma/idxd: not in enabled drivers build config 00:01:45.769 dma/ioat: not in enabled drivers build config 00:01:45.769 dma/skeleton: not in enabled drivers build config 00:01:45.769 net/af_packet: not in enabled drivers build config 00:01:45.769 net/af_xdp: not in enabled drivers build config 00:01:45.769 net/ark: not in enabled drivers build config 00:01:45.769 net/atlantic: not in enabled drivers build config 00:01:45.769 net/avp: not in enabled drivers build config 00:01:45.769 net/axgbe: not in enabled drivers build config 00:01:45.769 net/bnx2x: not in enabled drivers build config 00:01:45.769 net/bnxt: not in enabled drivers build config 00:01:45.769 net/bonding: not in enabled drivers build config 00:01:45.769 net/cnxk: not in enabled drivers build config 00:01:45.769 net/cpfl: not in enabled drivers build config 00:01:45.769 net/cxgbe: not in enabled drivers build config 00:01:45.769 net/dpaa: not in enabled drivers build config 00:01:45.769 net/dpaa2: not in enabled drivers build config 00:01:45.769 net/e1000: not in enabled drivers build config 00:01:45.769 net/ena: not in enabled drivers build config 00:01:45.769 net/enetc: not in enabled drivers build config 00:01:45.769 net/enetfec: not in enabled drivers build config 00:01:45.769 net/enic: not in enabled drivers build config 00:01:45.769 net/failsafe: not in enabled drivers build config 00:01:45.769 net/fm10k: not in enabled drivers build config 00:01:45.769 net/gve: not in enabled drivers build config 00:01:45.769 net/hinic: not in enabled drivers build config 00:01:45.769 net/hns3: not in enabled drivers build config 00:01:45.769 net/i40e: not in enabled drivers build config 00:01:45.769 net/iavf: not in enabled drivers build config 00:01:45.769 net/ice: not in enabled drivers build config 00:01:45.769 net/idpf: not in enabled drivers build config 00:01:45.769 net/igc: not in enabled drivers build config 00:01:45.769 net/ionic: not in enabled drivers build config 00:01:45.769 net/ipn3ke: not in enabled drivers build config 00:01:45.769 net/ixgbe: not in enabled drivers build config 00:01:45.769 net/mana: not in enabled drivers build config 00:01:45.769 net/memif: not in enabled drivers build config 00:01:45.769 net/mlx4: not in enabled drivers build config 00:01:45.769 net/mlx5: not in enabled drivers build config 00:01:45.769 net/mvneta: not in enabled drivers build config 00:01:45.769 net/mvpp2: not in enabled drivers build config 00:01:45.769 net/netvsc: not in enabled drivers build config 00:01:45.769 net/nfb: not in enabled drivers build config 00:01:45.769 net/nfp: not in enabled drivers build config 00:01:45.769 net/ngbe: not in enabled drivers build config 00:01:45.769 
net/null: not in enabled drivers build config 00:01:45.769 net/octeontx: not in enabled drivers build config 00:01:45.769 net/octeon_ep: not in enabled drivers build config 00:01:45.769 net/pcap: not in enabled drivers build config 00:01:45.769 net/pfe: not in enabled drivers build config 00:01:45.769 net/qede: not in enabled drivers build config 00:01:45.769 net/ring: not in enabled drivers build config 00:01:45.769 net/sfc: not in enabled drivers build config 00:01:45.769 net/softnic: not in enabled drivers build config 00:01:45.769 net/tap: not in enabled drivers build config 00:01:45.769 net/thunderx: not in enabled drivers build config 00:01:45.769 net/txgbe: not in enabled drivers build config 00:01:45.769 net/vdev_netvsc: not in enabled drivers build config 00:01:45.769 net/vhost: not in enabled drivers build config 00:01:45.769 net/virtio: not in enabled drivers build config 00:01:45.769 net/vmxnet3: not in enabled drivers build config 00:01:45.769 raw/*: missing internal dependency, "rawdev" 00:01:45.769 crypto/armv8: not in enabled drivers build config 00:01:45.769 crypto/bcmfs: not in enabled drivers build config 00:01:45.769 crypto/caam_jr: not in enabled drivers build config 00:01:45.769 crypto/ccp: not in enabled drivers build config 00:01:45.769 crypto/cnxk: not in enabled drivers build config 00:01:45.769 crypto/dpaa_sec: not in enabled drivers build config 00:01:45.769 crypto/dpaa2_sec: not in enabled drivers build config 00:01:45.769 crypto/ipsec_mb: not in enabled drivers build config 00:01:45.769 crypto/mlx5: not in enabled drivers build config 00:01:45.769 crypto/mvsam: not in enabled drivers build config 00:01:45.769 crypto/nitrox: not in enabled drivers build config 00:01:45.769 crypto/null: not in enabled drivers build config 00:01:45.769 crypto/octeontx: not in enabled drivers build config 00:01:45.769 crypto/openssl: not in enabled drivers build config 00:01:45.769 crypto/scheduler: not in enabled drivers build config 00:01:45.769 crypto/uadk: not in enabled drivers build config 00:01:45.769 crypto/virtio: not in enabled drivers build config 00:01:45.769 compress/isal: not in enabled drivers build config 00:01:45.769 compress/mlx5: not in enabled drivers build config 00:01:45.769 compress/nitrox: not in enabled drivers build config 00:01:45.769 compress/octeontx: not in enabled drivers build config 00:01:45.769 compress/zlib: not in enabled drivers build config 00:01:45.769 regex/*: missing internal dependency, "regexdev" 00:01:45.769 ml/*: missing internal dependency, "mldev" 00:01:45.769 vdpa/ifc: not in enabled drivers build config 00:01:45.769 vdpa/mlx5: not in enabled drivers build config 00:01:45.769 vdpa/nfp: not in enabled drivers build config 00:01:45.769 vdpa/sfc: not in enabled drivers build config 00:01:45.769 event/*: missing internal dependency, "eventdev" 00:01:45.769 baseband/*: missing internal dependency, "bbdev" 00:01:45.769 gpu/*: missing internal dependency, "gpudev" 00:01:45.769 00:01:45.769 00:01:45.769 Build targets in project: 85 00:01:45.769 00:01:45.769 DPDK 24.03.0 00:01:45.769 00:01:45.769 User defined options 00:01:45.769 buildtype : debug 00:01:45.769 default_library : shared 00:01:45.769 libdir : lib 00:01:45.769 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:45.769 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:45.769 c_link_args : 00:01:45.769 cpu_instruction_set: native 00:01:45.769 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:45.770 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:45.770 enable_docs : false 00:01:45.770 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:45.770 enable_kmods : false 00:01:45.770 max_lcores : 128 00:01:45.770 tests : false 00:01:45.770 00:01:45.770 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:45.770 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:45.770 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:45.770 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:46.034 [3/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:46.034 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:46.034 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:46.034 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:46.034 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:46.034 [8/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:46.034 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:46.034 [10/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:46.034 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:46.034 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:46.034 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:46.034 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:46.034 [15/268] Linking static target lib/librte_kvargs.a 00:01:46.034 [16/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:46.034 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:46.034 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:46.034 [19/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:46.034 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:46.034 [21/268] Linking static target lib/librte_log.a 00:01:46.034 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:46.034 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:46.034 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:46.034 [25/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:46.034 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:46.295 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:46.295 [28/268] Linking static target lib/librte_pci.a 00:01:46.295 [29/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:46.295 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:46.295 [31/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:46.295 [32/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:46.295 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:46.295 [34/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:46.552 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:46.552 [36/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:46.552 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:46.552 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:46.552 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:46.552 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:46.552 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:46.552 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:46.552 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:46.552 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:46.552 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:46.552 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:46.552 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:46.552 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:46.552 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:46.552 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:46.552 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:46.552 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:46.552 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:46.552 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:46.552 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:46.552 [56/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:46.552 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:46.552 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:46.552 [59/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:46.552 [60/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:46.552 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:46.552 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:46.552 [63/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:46.552 [64/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:46.552 [65/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:46.552 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:46.552 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:46.552 [68/268] Linking static target lib/librte_telemetry.a 00:01:46.552 [69/268] Linking static target lib/librte_ring.a 00:01:46.552 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:46.552 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:46.552 [72/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:46.552 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:46.552 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:46.552 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:46.552 [76/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.552 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:46.552 [78/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.552 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:46.552 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:46.552 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:46.552 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:46.552 [83/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:46.552 [84/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:46.552 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:46.810 [86/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:46.810 [87/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:46.810 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:46.810 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:46.810 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:46.810 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:46.810 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:46.810 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:46.810 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:46.810 [95/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:46.810 [96/268] Linking static target lib/librte_timer.a 00:01:46.810 [97/268] Linking static target lib/librte_meter.a 00:01:46.810 [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:46.810 [99/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:46.810 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:46.810 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:46.810 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:46.810 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:46.810 [104/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:46.810 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:46.810 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:46.810 [107/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:46.810 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:46.810 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:46.810 [110/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:46.810 [111/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:46.810 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:46.810 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:46.810 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:46.810 [115/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:46.810 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:46.810 [117/268] Linking static target lib/librte_cmdline.a 00:01:46.810 [118/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:46.810 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:46.810 [120/268] Linking static target lib/librte_mempool.a 00:01:46.810 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:46.810 [122/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:46.810 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:46.810 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:46.810 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:46.810 [126/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:46.810 [127/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:46.810 [128/268] Linking static target lib/librte_rcu.a 00:01:46.810 [129/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:46.810 [130/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:46.810 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:46.810 [132/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:46.810 [133/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:46.810 [134/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:46.810 [135/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:46.810 [136/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:46.810 [137/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:46.810 [138/268] Linking static target lib/librte_compressdev.a 00:01:46.810 [139/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:46.810 [140/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.810 [141/268] Linking static target lib/librte_dmadev.a 00:01:47.067 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:47.067 [143/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.067 [144/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:47.067 [145/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.067 [146/268] Linking static target lib/librte_mbuf.a 00:01:47.067 [147/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:47.067 [148/268] Linking target lib/librte_log.so.24.1 00:01:47.067 [149/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:47.067 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:47.067 [151/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:47.067 [152/268] 
Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:47.067 [153/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:47.067 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:47.067 [155/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:47.067 [156/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:47.067 [157/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:47.067 [158/268] Linking static target lib/librte_net.a 00:01:47.067 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:47.067 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:47.067 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.067 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:47.067 [163/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.067 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:47.067 [165/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:47.067 [166/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:47.067 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:47.067 [168/268] Linking static target lib/librte_eal.a 00:01:47.067 [169/268] Linking static target lib/librte_power.a 00:01:47.067 [170/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:47.067 [171/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.067 [172/268] Linking target lib/librte_kvargs.so.24.1 00:01:47.067 [173/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.067 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:47.067 [175/268] Linking target lib/librte_telemetry.so.24.1 00:01:47.325 [176/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:47.325 [177/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:47.325 [178/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:47.325 [179/268] Linking static target lib/librte_security.a 00:01:47.325 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:47.325 [181/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:47.325 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:47.325 [183/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:47.325 [184/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:47.325 [185/268] Linking static target lib/librte_hash.a 00:01:47.325 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:47.325 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:47.325 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:47.325 [189/268] Linking static target lib/librte_reorder.a 00:01:47.325 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:47.325 [191/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:47.325 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:47.325 [193/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:47.325 [194/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:47.325 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:47.325 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:47.325 [197/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.325 [198/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:47.325 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:47.325 [200/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:47.582 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:47.582 [202/268] Linking static target lib/librte_cryptodev.a 00:01:47.582 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.582 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.582 [205/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.582 [206/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.582 [207/268] Linking static target drivers/librte_bus_vdev.a 00:01:47.582 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:47.582 [209/268] Linking static target drivers/librte_mempool_ring.a 00:01:47.582 [210/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:47.582 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.582 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.582 [213/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.582 [214/268] Linking static target drivers/librte_bus_pci.a 00:01:47.582 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.582 [216/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.582 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.839 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:47.839 [219/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.839 [220/268] Linking static target lib/librte_ethdev.a 00:01:47.839 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.839 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.097 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.097 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.097 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:48.097 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.354 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.383 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.663 
[229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:49.663 [230/268] Linking static target lib/librte_vhost.a 00:01:51.569 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.845 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.780 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.780 [234/268] Linking target lib/librte_eal.so.24.1 00:01:57.780 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:58.039 [236/268] Linking target lib/librte_meter.so.24.1 00:01:58.039 [237/268] Linking target lib/librte_pci.so.24.1 00:01:58.039 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:58.039 [239/268] Linking target lib/librte_timer.so.24.1 00:01:58.039 [240/268] Linking target lib/librte_ring.so.24.1 00:01:58.039 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:58.039 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:58.039 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:58.039 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:58.039 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:58.039 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:58.039 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:58.039 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:58.039 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:58.298 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:58.298 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:58.298 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:58.298 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:58.557 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:58.557 [255/268] Linking target lib/librte_net.so.24.1 00:01:58.557 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:58.557 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:01:58.557 [258/268] Linking target lib/librte_compressdev.so.24.1 00:01:58.815 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:58.815 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:58.815 [261/268] Linking target lib/librte_security.so.24.1 00:01:58.815 [262/268] Linking target lib/librte_hash.so.24.1 00:01:58.815 [263/268] Linking target lib/librte_cmdline.so.24.1 00:01:58.815 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:59.074 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:59.074 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:59.074 [267/268] Linking target lib/librte_vhost.so.24.1 00:01:59.074 [268/268] Linking target lib/librte_power.so.24.1 00:01:59.074 INFO: autodetecting backend as ninja 00:01:59.074 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:00.451 CC lib/log/log.o 00:02:00.451 CC lib/log/log_deprecated.o 00:02:00.451 CC lib/log/log_flags.o 00:02:00.451 CC lib/ut_mock/mock.o 
00:02:00.451 CC lib/ut/ut.o 00:02:00.451 LIB libspdk_log.a 00:02:00.451 LIB libspdk_ut_mock.a 00:02:00.451 LIB libspdk_ut.a 00:02:00.451 SO libspdk_ut.so.2.0 00:02:00.451 SO libspdk_log.so.7.0 00:02:00.451 SO libspdk_ut_mock.so.6.0 00:02:00.710 SYMLINK libspdk_ut_mock.so 00:02:00.710 SYMLINK libspdk_ut.so 00:02:00.710 SYMLINK libspdk_log.so 00:02:00.968 CC lib/dma/dma.o 00:02:00.968 CC lib/ioat/ioat.o 00:02:00.968 CXX lib/trace_parser/trace.o 00:02:00.968 CC lib/util/base64.o 00:02:00.968 CC lib/util/bit_array.o 00:02:00.968 CC lib/util/cpuset.o 00:02:00.968 CC lib/util/crc16.o 00:02:00.968 CC lib/util/crc32.o 00:02:00.968 CC lib/util/crc32c.o 00:02:00.968 CC lib/util/crc32_ieee.o 00:02:00.968 CC lib/util/crc64.o 00:02:00.968 CC lib/util/dif.o 00:02:00.968 CC lib/util/fd.o 00:02:00.968 CC lib/util/fd_group.o 00:02:00.968 CC lib/util/file.o 00:02:00.968 CC lib/util/hexlify.o 00:02:00.968 CC lib/util/iov.o 00:02:00.968 CC lib/util/math.o 00:02:00.968 CC lib/util/net.o 00:02:00.968 CC lib/util/pipe.o 00:02:00.968 CC lib/util/strerror_tls.o 00:02:00.968 CC lib/util/string.o 00:02:00.968 CC lib/util/uuid.o 00:02:00.968 CC lib/util/xor.o 00:02:00.968 CC lib/util/zipf.o 00:02:01.226 CC lib/vfio_user/host/vfio_user_pci.o 00:02:01.226 CC lib/vfio_user/host/vfio_user.o 00:02:01.226 LIB libspdk_dma.a 00:02:01.226 SO libspdk_dma.so.4.0 00:02:01.226 LIB libspdk_ioat.a 00:02:01.226 SYMLINK libspdk_dma.so 00:02:01.226 SO libspdk_ioat.so.7.0 00:02:01.484 SYMLINK libspdk_ioat.so 00:02:01.484 LIB libspdk_vfio_user.a 00:02:01.484 SO libspdk_vfio_user.so.5.0 00:02:01.484 SYMLINK libspdk_vfio_user.so 00:02:01.484 LIB libspdk_util.a 00:02:01.743 SO libspdk_util.so.9.1 00:02:01.743 SYMLINK libspdk_util.so 00:02:02.002 LIB libspdk_trace_parser.a 00:02:02.002 SO libspdk_trace_parser.so.5.0 00:02:02.002 SYMLINK libspdk_trace_parser.so 00:02:02.260 CC lib/rdma_provider/common.o 00:02:02.260 CC lib/conf/conf.o 00:02:02.260 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:02.260 CC lib/json/json_parse.o 00:02:02.260 CC lib/vmd/led.o 00:02:02.260 CC lib/vmd/vmd.o 00:02:02.260 CC lib/json/json_util.o 00:02:02.260 CC lib/env_dpdk/env.o 00:02:02.260 CC lib/json/json_write.o 00:02:02.260 CC lib/env_dpdk/memory.o 00:02:02.260 CC lib/env_dpdk/pci.o 00:02:02.260 CC lib/env_dpdk/init.o 00:02:02.260 CC lib/env_dpdk/threads.o 00:02:02.260 CC lib/env_dpdk/pci_ioat.o 00:02:02.260 CC lib/env_dpdk/pci_virtio.o 00:02:02.260 CC lib/idxd/idxd.o 00:02:02.260 CC lib/rdma_utils/rdma_utils.o 00:02:02.260 CC lib/env_dpdk/pci_vmd.o 00:02:02.260 CC lib/idxd/idxd_user.o 00:02:02.260 CC lib/env_dpdk/pci_idxd.o 00:02:02.260 CC lib/idxd/idxd_kernel.o 00:02:02.260 CC lib/env_dpdk/pci_event.o 00:02:02.260 CC lib/env_dpdk/sigbus_handler.o 00:02:02.260 CC lib/env_dpdk/pci_dpdk.o 00:02:02.261 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:02.261 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:02.519 LIB libspdk_rdma_provider.a 00:02:02.519 LIB libspdk_conf.a 00:02:02.519 SO libspdk_rdma_provider.so.6.0 00:02:02.519 SO libspdk_conf.so.6.0 00:02:02.519 LIB libspdk_rdma_utils.a 00:02:02.519 SYMLINK libspdk_rdma_provider.so 00:02:02.519 SYMLINK libspdk_conf.so 00:02:02.519 SO libspdk_rdma_utils.so.1.0 00:02:02.519 SYMLINK libspdk_rdma_utils.so 00:02:02.778 LIB libspdk_json.a 00:02:02.778 SO libspdk_json.so.6.0 00:02:02.778 LIB libspdk_idxd.a 00:02:02.778 SYMLINK libspdk_json.so 00:02:02.778 SO libspdk_idxd.so.12.0 00:02:02.778 LIB libspdk_vmd.a 00:02:02.778 SYMLINK libspdk_idxd.so 00:02:02.778 SO libspdk_vmd.so.6.0 00:02:03.037 SYMLINK libspdk_vmd.so 00:02:03.037 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:03.037 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:03.037 CC lib/jsonrpc/jsonrpc_client.o 00:02:03.037 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:03.295 LIB libspdk_jsonrpc.a 00:02:03.295 SO libspdk_jsonrpc.so.6.0 00:02:03.553 SYMLINK libspdk_jsonrpc.so 00:02:03.553 LIB libspdk_env_dpdk.a 00:02:03.811 SO libspdk_env_dpdk.so.15.0 00:02:03.811 CC lib/rpc/rpc.o 00:02:03.811 SYMLINK libspdk_env_dpdk.so 00:02:04.070 LIB libspdk_rpc.a 00:02:04.070 SO libspdk_rpc.so.6.0 00:02:04.070 SYMLINK libspdk_rpc.so 00:02:04.637 CC lib/trace/trace.o 00:02:04.637 CC lib/notify/notify.o 00:02:04.637 CC lib/keyring/keyring.o 00:02:04.637 CC lib/trace/trace_flags.o 00:02:04.637 CC lib/notify/notify_rpc.o 00:02:04.637 CC lib/keyring/keyring_rpc.o 00:02:04.637 CC lib/trace/trace_rpc.o 00:02:04.637 LIB libspdk_notify.a 00:02:04.637 SO libspdk_notify.so.6.0 00:02:04.637 LIB libspdk_keyring.a 00:02:04.637 LIB libspdk_trace.a 00:02:04.897 SYMLINK libspdk_notify.so 00:02:04.897 SO libspdk_keyring.so.1.0 00:02:04.897 SO libspdk_trace.so.10.0 00:02:04.897 SYMLINK libspdk_trace.so 00:02:04.897 SYMLINK libspdk_keyring.so 00:02:05.155 CC lib/thread/thread.o 00:02:05.155 CC lib/thread/iobuf.o 00:02:05.155 CC lib/sock/sock.o 00:02:05.155 CC lib/sock/sock_rpc.o 00:02:05.723 LIB libspdk_sock.a 00:02:05.723 SO libspdk_sock.so.10.0 00:02:05.723 SYMLINK libspdk_sock.so 00:02:05.980 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:05.980 CC lib/nvme/nvme_ctrlr.o 00:02:05.980 CC lib/nvme/nvme_fabric.o 00:02:05.981 CC lib/nvme/nvme_ns_cmd.o 00:02:05.981 CC lib/nvme/nvme_ns.o 00:02:05.981 CC lib/nvme/nvme_pcie_common.o 00:02:05.981 CC lib/nvme/nvme_pcie.o 00:02:05.981 CC lib/nvme/nvme_qpair.o 00:02:05.981 CC lib/nvme/nvme.o 00:02:05.981 CC lib/nvme/nvme_quirks.o 00:02:05.981 CC lib/nvme/nvme_discovery.o 00:02:05.981 CC lib/nvme/nvme_transport.o 00:02:05.981 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:05.981 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:05.981 CC lib/nvme/nvme_tcp.o 00:02:05.981 CC lib/nvme/nvme_opal.o 00:02:05.981 CC lib/nvme/nvme_io_msg.o 00:02:05.981 CC lib/nvme/nvme_poll_group.o 00:02:05.981 CC lib/nvme/nvme_zns.o 00:02:05.981 CC lib/nvme/nvme_stubs.o 00:02:05.981 CC lib/nvme/nvme_auth.o 00:02:05.981 CC lib/nvme/nvme_cuse.o 00:02:05.981 CC lib/nvme/nvme_vfio_user.o 00:02:05.981 CC lib/nvme/nvme_rdma.o 00:02:06.545 LIB libspdk_thread.a 00:02:06.803 SO libspdk_thread.so.10.1 00:02:06.803 SYMLINK libspdk_thread.so 00:02:07.060 CC lib/init/json_config.o 00:02:07.060 CC lib/init/subsystem.o 00:02:07.060 CC lib/init/subsystem_rpc.o 00:02:07.060 CC lib/init/rpc.o 00:02:07.060 CC lib/blob/blobstore.o 00:02:07.060 CC lib/blob/request.o 00:02:07.060 CC lib/blob/zeroes.o 00:02:07.060 CC lib/blob/blob_bs_dev.o 00:02:07.060 CC lib/vfu_tgt/tgt_endpoint.o 00:02:07.060 CC lib/vfu_tgt/tgt_rpc.o 00:02:07.060 CC lib/virtio/virtio.o 00:02:07.060 CC lib/virtio/virtio_vhost_user.o 00:02:07.060 CC lib/virtio/virtio_vfio_user.o 00:02:07.060 CC lib/virtio/virtio_pci.o 00:02:07.060 CC lib/accel/accel.o 00:02:07.060 CC lib/accel/accel_rpc.o 00:02:07.060 CC lib/accel/accel_sw.o 00:02:07.317 LIB libspdk_init.a 00:02:07.317 SO libspdk_init.so.5.0 00:02:07.317 LIB libspdk_vfu_tgt.a 00:02:07.317 LIB libspdk_virtio.a 00:02:07.317 SO libspdk_vfu_tgt.so.3.0 00:02:07.317 SYMLINK libspdk_init.so 00:02:07.574 SO libspdk_virtio.so.7.0 00:02:07.574 SYMLINK libspdk_vfu_tgt.so 00:02:07.574 SYMLINK libspdk_virtio.so 00:02:07.832 CC lib/event/app.o 00:02:07.832 CC lib/event/reactor.o 00:02:07.832 CC lib/event/log_rpc.o 00:02:07.832 CC 
lib/event/app_rpc.o 00:02:07.832 CC lib/event/scheduler_static.o 00:02:08.089 LIB libspdk_accel.a 00:02:08.089 SO libspdk_accel.so.15.1 00:02:08.089 LIB libspdk_event.a 00:02:08.089 SYMLINK libspdk_accel.so 00:02:08.089 SO libspdk_event.so.14.0 00:02:08.347 SYMLINK libspdk_event.so 00:02:08.604 CC lib/bdev/bdev.o 00:02:08.604 CC lib/bdev/bdev_rpc.o 00:02:08.604 CC lib/bdev/bdev_zone.o 00:02:08.604 CC lib/bdev/part.o 00:02:08.604 CC lib/bdev/scsi_nvme.o 00:02:09.979 LIB libspdk_blob.a 00:02:10.238 SO libspdk_blob.so.11.0 00:02:10.238 SYMLINK libspdk_blob.so 00:02:10.497 LIB libspdk_nvme.a 00:02:10.497 CC lib/blobfs/blobfs.o 00:02:10.497 CC lib/blobfs/tree.o 00:02:10.497 CC lib/lvol/lvol.o 00:02:10.497 SO libspdk_nvme.so.13.1 00:02:10.755 SYMLINK libspdk_nvme.so 00:02:11.015 LIB libspdk_bdev.a 00:02:11.274 SO libspdk_bdev.so.15.1 00:02:11.274 SYMLINK libspdk_bdev.so 00:02:11.532 LIB libspdk_lvol.a 00:02:11.532 SO libspdk_lvol.so.10.0 00:02:11.532 SYMLINK libspdk_lvol.so 00:02:11.532 CC lib/nbd/nbd.o 00:02:11.532 CC lib/nbd/nbd_rpc.o 00:02:11.532 CC lib/ublk/ublk.o 00:02:11.532 CC lib/ftl/ftl_core.o 00:02:11.532 CC lib/ublk/ublk_rpc.o 00:02:11.532 CC lib/ftl/ftl_init.o 00:02:11.532 CC lib/ftl/ftl_layout.o 00:02:11.532 CC lib/ftl/ftl_debug.o 00:02:11.532 CC lib/ftl/ftl_io.o 00:02:11.532 CC lib/ftl/ftl_sb.o 00:02:11.532 CC lib/ftl/ftl_l2p.o 00:02:11.532 CC lib/nvmf/ctrlr.o 00:02:11.533 CC lib/nvmf/ctrlr_discovery.o 00:02:11.533 CC lib/ftl/ftl_l2p_flat.o 00:02:11.533 CC lib/scsi/dev.o 00:02:11.533 CC lib/nvmf/ctrlr_bdev.o 00:02:11.533 CC lib/nvmf/subsystem.o 00:02:11.533 CC lib/ftl/ftl_nv_cache.o 00:02:11.533 CC lib/nvmf/nvmf.o 00:02:11.533 CC lib/scsi/lun.o 00:02:11.533 CC lib/ftl/ftl_band.o 00:02:11.533 CC lib/scsi/port.o 00:02:11.533 CC lib/ftl/ftl_band_ops.o 00:02:11.533 CC lib/nvmf/nvmf_rpc.o 00:02:11.533 CC lib/scsi/scsi.o 00:02:11.533 CC lib/ftl/ftl_writer.o 00:02:11.533 CC lib/ftl/ftl_rq.o 00:02:11.533 CC lib/scsi/scsi_bdev.o 00:02:11.533 CC lib/nvmf/transport.o 00:02:11.533 CC lib/ftl/ftl_l2p_cache.o 00:02:11.533 CC lib/ftl/ftl_reloc.o 00:02:11.533 CC lib/nvmf/tcp.o 00:02:11.533 CC lib/scsi/scsi_pr.o 00:02:11.533 CC lib/scsi/scsi_rpc.o 00:02:11.533 CC lib/nvmf/stubs.o 00:02:11.533 CC lib/ftl/ftl_p2l.o 00:02:11.533 CC lib/nvmf/mdns_server.o 00:02:11.533 CC lib/scsi/task.o 00:02:11.533 CC lib/nvmf/vfio_user.o 00:02:11.533 CC lib/ftl/mngt/ftl_mngt.o 00:02:11.533 CC lib/nvmf/rdma.o 00:02:11.533 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:11.533 CC lib/nvmf/auth.o 00:02:11.533 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:11.533 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:11.533 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:11.533 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:11.533 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:11.533 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:11.533 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:11.533 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:11.533 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:11.533 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:11.533 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:11.533 CC lib/ftl/utils/ftl_conf.o 00:02:11.533 CC lib/ftl/utils/ftl_md.o 00:02:11.533 CC lib/ftl/utils/ftl_mempool.o 00:02:11.533 CC lib/ftl/utils/ftl_bitmap.o 00:02:11.533 CC lib/ftl/utils/ftl_property.o 00:02:11.533 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:11.533 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:11.533 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:11.533 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:11.533 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:11.533 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:11.533 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:11.533 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:11.533 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:11.533 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:11.533 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:11.533 CC lib/ftl/base/ftl_base_dev.o 00:02:11.533 CC lib/ftl/base/ftl_base_bdev.o 00:02:11.533 CC lib/ftl/ftl_trace.o 00:02:12.099 LIB libspdk_blobfs.a 00:02:12.099 SO libspdk_blobfs.so.10.0 00:02:12.357 SYMLINK libspdk_blobfs.so 00:02:12.357 LIB libspdk_nbd.a 00:02:12.357 SO libspdk_nbd.so.7.0 00:02:12.357 LIB libspdk_scsi.a 00:02:12.357 SO libspdk_scsi.so.9.0 00:02:12.357 SYMLINK libspdk_nbd.so 00:02:12.357 SYMLINK libspdk_scsi.so 00:02:12.616 LIB libspdk_ublk.a 00:02:12.616 SO libspdk_ublk.so.3.0 00:02:12.616 SYMLINK libspdk_ublk.so 00:02:12.616 CC lib/vhost/vhost.o 00:02:12.616 CC lib/vhost/vhost_scsi.o 00:02:12.616 CC lib/vhost/vhost_rpc.o 00:02:12.616 CC lib/vhost/vhost_blk.o 00:02:12.616 CC lib/vhost/rte_vhost_user.o 00:02:12.616 CC lib/iscsi/conn.o 00:02:12.616 CC lib/iscsi/init_grp.o 00:02:12.616 CC lib/iscsi/iscsi.o 00:02:12.616 CC lib/iscsi/param.o 00:02:12.616 CC lib/iscsi/md5.o 00:02:12.616 CC lib/iscsi/iscsi_subsystem.o 00:02:12.616 CC lib/iscsi/portal_grp.o 00:02:12.616 CC lib/iscsi/tgt_node.o 00:02:12.616 CC lib/iscsi/iscsi_rpc.o 00:02:12.616 CC lib/iscsi/task.o 00:02:12.874 LIB libspdk_ftl.a 00:02:12.874 SO libspdk_ftl.so.9.0 00:02:13.441 SYMLINK libspdk_ftl.so 00:02:14.008 LIB libspdk_vhost.a 00:02:14.008 SO libspdk_vhost.so.8.0 00:02:14.008 SYMLINK libspdk_vhost.so 00:02:14.008 LIB libspdk_iscsi.a 00:02:14.008 SO libspdk_iscsi.so.8.0 00:02:14.267 SYMLINK libspdk_iscsi.so 00:02:14.834 LIB libspdk_nvmf.a 00:02:15.094 SO libspdk_nvmf.so.19.0 00:02:15.094 SYMLINK libspdk_nvmf.so 00:02:15.661 CC module/vfu_device/vfu_virtio.o 00:02:15.661 CC module/vfu_device/vfu_virtio_blk.o 00:02:15.661 CC module/vfu_device/vfu_virtio_scsi.o 00:02:15.661 CC module/vfu_device/vfu_virtio_rpc.o 00:02:15.661 CC module/env_dpdk/env_dpdk_rpc.o 00:02:15.920 CC module/blob/bdev/blob_bdev.o 00:02:15.920 CC module/accel/iaa/accel_iaa.o 00:02:15.920 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:15.920 CC module/scheduler/gscheduler/gscheduler.o 00:02:15.920 CC module/accel/iaa/accel_iaa_rpc.o 00:02:15.920 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:15.920 CC module/keyring/file/keyring.o 00:02:15.920 CC module/sock/posix/posix.o 00:02:15.920 CC module/keyring/file/keyring_rpc.o 00:02:15.920 CC module/accel/dsa/accel_dsa.o 00:02:15.920 CC module/accel/error/accel_error.o 00:02:15.920 CC module/accel/dsa/accel_dsa_rpc.o 00:02:15.920 CC module/accel/error/accel_error_rpc.o 00:02:15.920 CC module/accel/ioat/accel_ioat.o 00:02:15.920 CC module/keyring/linux/keyring.o 00:02:15.920 CC module/accel/ioat/accel_ioat_rpc.o 00:02:15.920 CC module/keyring/linux/keyring_rpc.o 00:02:15.920 LIB libspdk_env_dpdk_rpc.a 00:02:15.920 SO libspdk_env_dpdk_rpc.so.6.0 00:02:15.920 SYMLINK libspdk_env_dpdk_rpc.so 00:02:15.920 LIB libspdk_scheduler_dynamic.a 00:02:16.179 LIB libspdk_keyring_file.a 00:02:16.179 LIB libspdk_accel_error.a 00:02:16.179 LIB libspdk_scheduler_dpdk_governor.a 00:02:16.179 LIB libspdk_scheduler_gscheduler.a 00:02:16.179 SO libspdk_scheduler_dynamic.so.4.0 00:02:16.179 SO libspdk_keyring_file.so.1.0 00:02:16.179 SO libspdk_accel_error.so.2.0 00:02:16.179 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:16.179 LIB libspdk_accel_ioat.a 00:02:16.179 SO libspdk_scheduler_gscheduler.so.4.0 00:02:16.179 LIB libspdk_accel_iaa.a 00:02:16.179 SYMLINK 
libspdk_scheduler_dynamic.so 00:02:16.179 SO libspdk_accel_ioat.so.6.0 00:02:16.179 SO libspdk_accel_iaa.so.3.0 00:02:16.179 LIB libspdk_blob_bdev.a 00:02:16.179 SYMLINK libspdk_keyring_file.so 00:02:16.179 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:16.179 SYMLINK libspdk_accel_error.so 00:02:16.179 LIB libspdk_accel_dsa.a 00:02:16.179 SYMLINK libspdk_scheduler_gscheduler.so 00:02:16.179 LIB libspdk_keyring_linux.a 00:02:16.179 SO libspdk_blob_bdev.so.11.0 00:02:16.179 SO libspdk_accel_dsa.so.5.0 00:02:16.179 SO libspdk_keyring_linux.so.1.0 00:02:16.179 SYMLINK libspdk_accel_ioat.so 00:02:16.179 SYMLINK libspdk_accel_iaa.so 00:02:16.179 SYMLINK libspdk_blob_bdev.so 00:02:16.179 SYMLINK libspdk_keyring_linux.so 00:02:16.179 SYMLINK libspdk_accel_dsa.so 00:02:16.438 LIB libspdk_vfu_device.a 00:02:16.438 SO libspdk_vfu_device.so.3.0 00:02:16.438 SYMLINK libspdk_vfu_device.so 00:02:16.697 LIB libspdk_sock_posix.a 00:02:16.697 SO libspdk_sock_posix.so.6.0 00:02:16.697 CC module/bdev/error/vbdev_error.o 00:02:16.697 CC module/bdev/error/vbdev_error_rpc.o 00:02:16.697 CC module/bdev/gpt/gpt.o 00:02:16.697 CC module/bdev/gpt/vbdev_gpt.o 00:02:16.697 CC module/bdev/malloc/bdev_malloc.o 00:02:16.697 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:16.697 CC module/bdev/delay/vbdev_delay.o 00:02:16.697 CC module/bdev/lvol/vbdev_lvol.o 00:02:16.697 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:16.697 SYMLINK libspdk_sock_posix.so 00:02:16.697 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:16.697 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:16.697 CC module/bdev/nvme/bdev_nvme.o 00:02:16.697 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:16.697 CC module/blobfs/bdev/blobfs_bdev.o 00:02:16.697 CC module/bdev/nvme/nvme_rpc.o 00:02:16.697 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:16.697 CC module/bdev/nvme/vbdev_opal.o 00:02:16.697 CC module/bdev/nvme/bdev_mdns_client.o 00:02:16.697 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:16.697 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:16.697 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:16.697 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:16.697 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:16.697 CC module/bdev/ftl/bdev_ftl.o 00:02:16.697 CC module/bdev/null/bdev_null.o 00:02:16.697 CC module/bdev/null/bdev_null_rpc.o 00:02:16.697 CC module/bdev/split/vbdev_split.o 00:02:16.697 CC module/bdev/aio/bdev_aio_rpc.o 00:02:16.697 CC module/bdev/aio/bdev_aio.o 00:02:16.697 CC module/bdev/split/vbdev_split_rpc.o 00:02:16.697 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:16.697 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:16.697 CC module/bdev/iscsi/bdev_iscsi.o 00:02:16.697 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:16.697 CC module/bdev/passthru/vbdev_passthru.o 00:02:16.697 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:16.697 CC module/bdev/raid/bdev_raid.o 00:02:16.697 CC module/bdev/raid/bdev_raid_rpc.o 00:02:16.697 CC module/bdev/raid/raid0.o 00:02:16.697 CC module/bdev/raid/bdev_raid_sb.o 00:02:16.697 CC module/bdev/raid/raid1.o 00:02:16.697 CC module/bdev/raid/concat.o 00:02:16.955 LIB libspdk_blobfs_bdev.a 00:02:16.955 LIB libspdk_bdev_gpt.a 00:02:17.213 SO libspdk_blobfs_bdev.so.6.0 00:02:17.213 SO libspdk_bdev_gpt.so.6.0 00:02:17.213 LIB libspdk_bdev_split.a 00:02:17.213 LIB libspdk_bdev_error.a 00:02:17.213 LIB libspdk_bdev_ftl.a 00:02:17.213 SYMLINK libspdk_blobfs_bdev.so 00:02:17.213 SO libspdk_bdev_split.so.6.0 00:02:17.213 SO libspdk_bdev_error.so.6.0 00:02:17.213 SO libspdk_bdev_ftl.so.6.0 00:02:17.213 SYMLINK libspdk_bdev_gpt.so 
00:02:17.213 LIB libspdk_bdev_passthru.a 00:02:17.213 LIB libspdk_bdev_zone_block.a 00:02:17.213 LIB libspdk_bdev_aio.a 00:02:17.213 LIB libspdk_bdev_malloc.a 00:02:17.213 SO libspdk_bdev_passthru.so.6.0 00:02:17.213 SO libspdk_bdev_zone_block.so.6.0 00:02:17.213 SYMLINK libspdk_bdev_split.so 00:02:17.213 LIB libspdk_bdev_delay.a 00:02:17.213 SYMLINK libspdk_bdev_ftl.so 00:02:17.213 SYMLINK libspdk_bdev_error.so 00:02:17.213 LIB libspdk_bdev_iscsi.a 00:02:17.213 SO libspdk_bdev_aio.so.6.0 00:02:17.213 SO libspdk_bdev_malloc.so.6.0 00:02:17.213 SO libspdk_bdev_delay.so.6.0 00:02:17.213 SO libspdk_bdev_iscsi.so.6.0 00:02:17.213 SYMLINK libspdk_bdev_passthru.so 00:02:17.213 SYMLINK libspdk_bdev_zone_block.so 00:02:17.213 LIB libspdk_bdev_null.a 00:02:17.213 SYMLINK libspdk_bdev_aio.so 00:02:17.213 LIB libspdk_bdev_lvol.a 00:02:17.213 SYMLINK libspdk_bdev_malloc.so 00:02:17.471 SO libspdk_bdev_null.so.6.0 00:02:17.471 SYMLINK libspdk_bdev_iscsi.so 00:02:17.471 SO libspdk_bdev_lvol.so.6.0 00:02:17.471 SYMLINK libspdk_bdev_delay.so 00:02:17.471 LIB libspdk_bdev_virtio.a 00:02:17.471 SYMLINK libspdk_bdev_null.so 00:02:17.471 SO libspdk_bdev_virtio.so.6.0 00:02:17.471 SYMLINK libspdk_bdev_lvol.so 00:02:17.471 SYMLINK libspdk_bdev_virtio.so 00:02:17.730 LIB libspdk_bdev_raid.a 00:02:17.988 SO libspdk_bdev_raid.so.6.0 00:02:17.988 SYMLINK libspdk_bdev_raid.so 00:02:19.025 LIB libspdk_bdev_nvme.a 00:02:19.025 SO libspdk_bdev_nvme.so.7.0 00:02:19.284 SYMLINK libspdk_bdev_nvme.so 00:02:19.851 CC module/event/subsystems/vmd/vmd.o 00:02:19.851 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:19.851 CC module/event/subsystems/keyring/keyring.o 00:02:19.851 CC module/event/subsystems/sock/sock.o 00:02:19.851 CC module/event/subsystems/iobuf/iobuf.o 00:02:19.851 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:19.851 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:19.851 CC module/event/subsystems/scheduler/scheduler.o 00:02:19.851 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:20.109 LIB libspdk_event_keyring.a 00:02:20.109 LIB libspdk_event_vmd.a 00:02:20.109 LIB libspdk_event_sock.a 00:02:20.109 LIB libspdk_event_vhost_blk.a 00:02:20.109 LIB libspdk_event_scheduler.a 00:02:20.109 LIB libspdk_event_iobuf.a 00:02:20.109 LIB libspdk_event_vfu_tgt.a 00:02:20.109 SO libspdk_event_keyring.so.1.0 00:02:20.109 SO libspdk_event_vmd.so.6.0 00:02:20.109 SO libspdk_event_sock.so.5.0 00:02:20.110 SO libspdk_event_scheduler.so.4.0 00:02:20.110 SO libspdk_event_vhost_blk.so.3.0 00:02:20.110 SO libspdk_event_iobuf.so.3.0 00:02:20.110 SO libspdk_event_vfu_tgt.so.3.0 00:02:20.110 SYMLINK libspdk_event_keyring.so 00:02:20.110 SYMLINK libspdk_event_scheduler.so 00:02:20.110 SYMLINK libspdk_event_sock.so 00:02:20.110 SYMLINK libspdk_event_vmd.so 00:02:20.110 SYMLINK libspdk_event_iobuf.so 00:02:20.110 SYMLINK libspdk_event_vfu_tgt.so 00:02:20.110 SYMLINK libspdk_event_vhost_blk.so 00:02:20.678 CC module/event/subsystems/accel/accel.o 00:02:20.678 LIB libspdk_event_accel.a 00:02:20.678 SO libspdk_event_accel.so.6.0 00:02:20.678 SYMLINK libspdk_event_accel.so 00:02:21.246 CC module/event/subsystems/bdev/bdev.o 00:02:21.246 LIB libspdk_event_bdev.a 00:02:21.246 SO libspdk_event_bdev.so.6.0 00:02:21.505 SYMLINK libspdk_event_bdev.so 00:02:21.763 CC module/event/subsystems/nbd/nbd.o 00:02:21.763 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:21.763 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:21.763 CC module/event/subsystems/scsi/scsi.o 00:02:21.763 CC module/event/subsystems/ublk/ublk.o 00:02:21.763 LIB 
libspdk_event_ublk.a 00:02:21.763 LIB libspdk_event_nbd.a 00:02:21.763 LIB libspdk_event_scsi.a 00:02:21.763 SO libspdk_event_ublk.so.3.0 00:02:22.021 SO libspdk_event_nbd.so.6.0 00:02:22.021 SO libspdk_event_scsi.so.6.0 00:02:22.021 SYMLINK libspdk_event_ublk.so 00:02:22.021 LIB libspdk_event_nvmf.a 00:02:22.021 SYMLINK libspdk_event_scsi.so 00:02:22.021 SO libspdk_event_nvmf.so.6.0 00:02:22.021 SYMLINK libspdk_event_nbd.so 00:02:22.021 SYMLINK libspdk_event_nvmf.so 00:02:22.279 CC module/event/subsystems/iscsi/iscsi.o 00:02:22.279 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:22.538 LIB libspdk_event_vhost_scsi.a 00:02:22.538 LIB libspdk_event_iscsi.a 00:02:22.538 SO libspdk_event_vhost_scsi.so.3.0 00:02:22.538 SO libspdk_event_iscsi.so.6.0 00:02:22.538 SYMLINK libspdk_event_vhost_scsi.so 00:02:22.538 SYMLINK libspdk_event_iscsi.so 00:02:22.797 SO libspdk.so.6.0 00:02:22.797 SYMLINK libspdk.so 00:02:23.055 CC app/spdk_nvme_identify/identify.o 00:02:23.055 CC app/spdk_lspci/spdk_lspci.o 00:02:23.055 CC app/spdk_top/spdk_top.o 00:02:23.055 CC app/trace_record/trace_record.o 00:02:23.055 CXX app/trace/trace.o 00:02:23.055 TEST_HEADER include/spdk/accel.h 00:02:23.055 TEST_HEADER include/spdk/accel_module.h 00:02:23.055 CC test/rpc_client/rpc_client_test.o 00:02:23.055 TEST_HEADER include/spdk/barrier.h 00:02:23.055 TEST_HEADER include/spdk/assert.h 00:02:23.055 TEST_HEADER include/spdk/base64.h 00:02:23.055 TEST_HEADER include/spdk/bdev.h 00:02:23.055 CC app/spdk_nvme_perf/perf.o 00:02:23.055 TEST_HEADER include/spdk/bdev_zone.h 00:02:23.055 TEST_HEADER include/spdk/bdev_module.h 00:02:23.055 TEST_HEADER include/spdk/bit_array.h 00:02:23.055 TEST_HEADER include/spdk/blob_bdev.h 00:02:23.055 TEST_HEADER include/spdk/bit_pool.h 00:02:23.055 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:23.055 TEST_HEADER include/spdk/blobfs.h 00:02:23.055 TEST_HEADER include/spdk/conf.h 00:02:23.055 TEST_HEADER include/spdk/blob.h 00:02:23.055 TEST_HEADER include/spdk/cpuset.h 00:02:23.055 CC app/spdk_nvme_discover/discovery_aer.o 00:02:23.055 TEST_HEADER include/spdk/config.h 00:02:23.055 TEST_HEADER include/spdk/crc16.h 00:02:23.055 TEST_HEADER include/spdk/crc32.h 00:02:23.055 TEST_HEADER include/spdk/crc64.h 00:02:23.055 TEST_HEADER include/spdk/dif.h 00:02:23.055 TEST_HEADER include/spdk/dma.h 00:02:23.055 TEST_HEADER include/spdk/endian.h 00:02:23.055 TEST_HEADER include/spdk/env_dpdk.h 00:02:23.055 TEST_HEADER include/spdk/env.h 00:02:23.055 TEST_HEADER include/spdk/event.h 00:02:23.056 TEST_HEADER include/spdk/fd_group.h 00:02:23.056 TEST_HEADER include/spdk/fd.h 00:02:23.056 TEST_HEADER include/spdk/file.h 00:02:23.056 TEST_HEADER include/spdk/ftl.h 00:02:23.056 TEST_HEADER include/spdk/gpt_spec.h 00:02:23.056 TEST_HEADER include/spdk/hexlify.h 00:02:23.056 TEST_HEADER include/spdk/idxd.h 00:02:23.056 TEST_HEADER include/spdk/histogram_data.h 00:02:23.056 TEST_HEADER include/spdk/init.h 00:02:23.056 TEST_HEADER include/spdk/idxd_spec.h 00:02:23.056 TEST_HEADER include/spdk/ioat_spec.h 00:02:23.056 TEST_HEADER include/spdk/ioat.h 00:02:23.056 TEST_HEADER include/spdk/iscsi_spec.h 00:02:23.056 TEST_HEADER include/spdk/json.h 00:02:23.056 TEST_HEADER include/spdk/jsonrpc.h 00:02:23.056 TEST_HEADER include/spdk/keyring.h 00:02:23.056 TEST_HEADER include/spdk/keyring_module.h 00:02:23.056 TEST_HEADER include/spdk/likely.h 00:02:23.056 TEST_HEADER include/spdk/log.h 00:02:23.056 CC app/spdk_dd/spdk_dd.o 00:02:23.056 TEST_HEADER include/spdk/lvol.h 00:02:23.056 TEST_HEADER include/spdk/mmio.h 
00:02:23.056 TEST_HEADER include/spdk/memory.h 00:02:23.056 TEST_HEADER include/spdk/nbd.h 00:02:23.056 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:23.056 TEST_HEADER include/spdk/net.h 00:02:23.056 TEST_HEADER include/spdk/notify.h 00:02:23.056 TEST_HEADER include/spdk/nvme_intel.h 00:02:23.056 TEST_HEADER include/spdk/nvme.h 00:02:23.056 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:23.056 TEST_HEADER include/spdk/nvme_spec.h 00:02:23.056 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:23.056 TEST_HEADER include/spdk/nvme_zns.h 00:02:23.056 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:23.056 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:23.056 TEST_HEADER include/spdk/nvmf_spec.h 00:02:23.056 TEST_HEADER include/spdk/nvmf.h 00:02:23.056 TEST_HEADER include/spdk/nvmf_transport.h 00:02:23.056 TEST_HEADER include/spdk/opal_spec.h 00:02:23.056 TEST_HEADER include/spdk/opal.h 00:02:23.056 TEST_HEADER include/spdk/pci_ids.h 00:02:23.056 TEST_HEADER include/spdk/queue.h 00:02:23.056 TEST_HEADER include/spdk/pipe.h 00:02:23.056 TEST_HEADER include/spdk/reduce.h 00:02:23.056 TEST_HEADER include/spdk/rpc.h 00:02:23.056 TEST_HEADER include/spdk/scheduler.h 00:02:23.056 TEST_HEADER include/spdk/scsi.h 00:02:23.056 TEST_HEADER include/spdk/scsi_spec.h 00:02:23.056 TEST_HEADER include/spdk/sock.h 00:02:23.056 TEST_HEADER include/spdk/stdinc.h 00:02:23.056 TEST_HEADER include/spdk/thread.h 00:02:23.056 TEST_HEADER include/spdk/string.h 00:02:23.056 CC app/iscsi_tgt/iscsi_tgt.o 00:02:23.056 TEST_HEADER include/spdk/trace.h 00:02:23.324 TEST_HEADER include/spdk/tree.h 00:02:23.324 TEST_HEADER include/spdk/trace_parser.h 00:02:23.324 TEST_HEADER include/spdk/ublk.h 00:02:23.324 TEST_HEADER include/spdk/util.h 00:02:23.324 TEST_HEADER include/spdk/uuid.h 00:02:23.324 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:23.324 TEST_HEADER include/spdk/version.h 00:02:23.324 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:23.324 TEST_HEADER include/spdk/vhost.h 00:02:23.324 TEST_HEADER include/spdk/vmd.h 00:02:23.324 TEST_HEADER include/spdk/zipf.h 00:02:23.324 CC app/spdk_tgt/spdk_tgt.o 00:02:23.324 TEST_HEADER include/spdk/xor.h 00:02:23.324 CXX test/cpp_headers/accel.o 00:02:23.324 CXX test/cpp_headers/assert.o 00:02:23.324 CXX test/cpp_headers/accel_module.o 00:02:23.324 CXX test/cpp_headers/base64.o 00:02:23.324 CXX test/cpp_headers/barrier.o 00:02:23.324 CXX test/cpp_headers/bdev.o 00:02:23.324 CXX test/cpp_headers/bdev_module.o 00:02:23.324 CXX test/cpp_headers/bdev_zone.o 00:02:23.324 CXX test/cpp_headers/bit_pool.o 00:02:23.324 CXX test/cpp_headers/blob_bdev.o 00:02:23.324 CXX test/cpp_headers/bit_array.o 00:02:23.324 CXX test/cpp_headers/blobfs_bdev.o 00:02:23.324 CXX test/cpp_headers/blobfs.o 00:02:23.324 CXX test/cpp_headers/conf.o 00:02:23.324 CXX test/cpp_headers/blob.o 00:02:23.324 CXX test/cpp_headers/config.o 00:02:23.325 CXX test/cpp_headers/cpuset.o 00:02:23.325 CXX test/cpp_headers/crc16.o 00:02:23.325 CXX test/cpp_headers/crc32.o 00:02:23.325 CXX test/cpp_headers/crc64.o 00:02:23.325 CXX test/cpp_headers/dif.o 00:02:23.325 CXX test/cpp_headers/env_dpdk.o 00:02:23.325 CXX test/cpp_headers/dma.o 00:02:23.325 CXX test/cpp_headers/endian.o 00:02:23.325 CXX test/cpp_headers/event.o 00:02:23.325 CXX test/cpp_headers/env.o 00:02:23.325 CXX test/cpp_headers/fd.o 00:02:23.325 CXX test/cpp_headers/file.o 00:02:23.325 CXX test/cpp_headers/ftl.o 00:02:23.325 CXX test/cpp_headers/fd_group.o 00:02:23.325 CXX test/cpp_headers/hexlify.o 00:02:23.325 CXX test/cpp_headers/gpt_spec.o 00:02:23.325 CXX 
test/cpp_headers/histogram_data.o 00:02:23.325 CXX test/cpp_headers/idxd_spec.o 00:02:23.325 CXX test/cpp_headers/idxd.o 00:02:23.325 CXX test/cpp_headers/init.o 00:02:23.325 CXX test/cpp_headers/ioat_spec.o 00:02:23.325 CXX test/cpp_headers/json.o 00:02:23.325 CXX test/cpp_headers/iscsi_spec.o 00:02:23.325 CXX test/cpp_headers/ioat.o 00:02:23.325 CC app/nvmf_tgt/nvmf_main.o 00:02:23.325 CXX test/cpp_headers/jsonrpc.o 00:02:23.325 CXX test/cpp_headers/keyring_module.o 00:02:23.325 CXX test/cpp_headers/keyring.o 00:02:23.325 CXX test/cpp_headers/likely.o 00:02:23.325 CXX test/cpp_headers/lvol.o 00:02:23.325 CXX test/cpp_headers/log.o 00:02:23.325 CXX test/cpp_headers/mmio.o 00:02:23.325 CXX test/cpp_headers/memory.o 00:02:23.325 CXX test/cpp_headers/nbd.o 00:02:23.325 CXX test/cpp_headers/net.o 00:02:23.325 CXX test/cpp_headers/nvme.o 00:02:23.325 CXX test/cpp_headers/notify.o 00:02:23.325 CXX test/cpp_headers/nvme_spec.o 00:02:23.325 CXX test/cpp_headers/nvme_ocssd.o 00:02:23.325 CXX test/cpp_headers/nvme_intel.o 00:02:23.325 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:23.325 CXX test/cpp_headers/nvme_zns.o 00:02:23.325 CXX test/cpp_headers/nvmf_cmd.o 00:02:23.325 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:23.325 CXX test/cpp_headers/nvmf.o 00:02:23.325 CXX test/cpp_headers/opal.o 00:02:23.325 CXX test/cpp_headers/nvmf_spec.o 00:02:23.325 CXX test/cpp_headers/nvmf_transport.o 00:02:23.325 CXX test/cpp_headers/opal_spec.o 00:02:23.325 CXX test/cpp_headers/pci_ids.o 00:02:23.325 CXX test/cpp_headers/pipe.o 00:02:23.325 CXX test/cpp_headers/queue.o 00:02:23.325 CXX test/cpp_headers/reduce.o 00:02:23.325 CXX test/cpp_headers/rpc.o 00:02:23.325 CXX test/cpp_headers/scheduler.o 00:02:23.325 CXX test/cpp_headers/scsi.o 00:02:23.325 CXX test/cpp_headers/scsi_spec.o 00:02:23.325 CXX test/cpp_headers/sock.o 00:02:23.325 CXX test/cpp_headers/stdinc.o 00:02:23.325 CXX test/cpp_headers/string.o 00:02:23.325 CXX test/cpp_headers/thread.o 00:02:23.325 CXX test/cpp_headers/trace.o 00:02:23.325 CXX test/cpp_headers/trace_parser.o 00:02:23.325 CXX test/cpp_headers/tree.o 00:02:23.325 CC examples/util/zipf/zipf.o 00:02:23.325 CXX test/cpp_headers/ublk.o 00:02:23.325 CXX test/cpp_headers/util.o 00:02:23.325 CXX test/cpp_headers/uuid.o 00:02:23.325 CXX test/cpp_headers/version.o 00:02:23.325 CC test/env/memory/memory_ut.o 00:02:23.325 CC test/env/vtophys/vtophys.o 00:02:23.325 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:23.325 CC test/env/pci/pci_ut.o 00:02:23.325 CXX test/cpp_headers/vfio_user_pci.o 00:02:23.325 CC app/fio/nvme/fio_plugin.o 00:02:23.325 CC examples/ioat/verify/verify.o 00:02:23.325 CC test/thread/poller_perf/poller_perf.o 00:02:23.325 CC test/app/histogram_perf/histogram_perf.o 00:02:23.325 CXX test/cpp_headers/vfio_user_spec.o 00:02:23.600 CC test/app/jsoncat/jsoncat.o 00:02:23.600 CC examples/ioat/perf/perf.o 00:02:23.600 CC test/dma/test_dma/test_dma.o 00:02:23.600 CC test/app/stub/stub.o 00:02:23.600 CC app/fio/bdev/fio_plugin.o 00:02:23.600 LINK spdk_lspci 00:02:23.600 CC test/app/bdev_svc/bdev_svc.o 00:02:23.864 LINK rpc_client_test 00:02:24.121 LINK interrupt_tgt 00:02:24.121 LINK spdk_trace_record 00:02:24.121 CXX test/cpp_headers/vhost.o 00:02:24.121 CXX test/cpp_headers/vmd.o 00:02:24.121 CXX test/cpp_headers/xor.o 00:02:24.121 LINK spdk_nvme_discover 00:02:24.121 CXX test/cpp_headers/zipf.o 00:02:24.121 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:24.121 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:24.121 LINK zipf 00:02:24.122 LINK nvmf_tgt 00:02:24.122 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:24.122 LINK jsoncat 00:02:24.122 CC test/env/mem_callbacks/mem_callbacks.o 00:02:24.122 LINK vtophys 00:02:24.122 LINK env_dpdk_post_init 00:02:24.122 LINK iscsi_tgt 00:02:24.122 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:24.122 LINK spdk_tgt 00:02:24.122 LINK poller_perf 00:02:24.122 LINK histogram_perf 00:02:24.122 LINK ioat_perf 00:02:24.379 LINK spdk_trace 00:02:24.379 LINK stub 00:02:24.379 LINK bdev_svc 00:02:24.379 LINK verify 00:02:24.379 LINK spdk_dd 00:02:24.379 LINK pci_ut 00:02:24.637 LINK test_dma 00:02:24.637 LINK spdk_nvme 00:02:24.637 LINK nvme_fuzz 00:02:24.637 LINK spdk_bdev 00:02:24.637 LINK spdk_nvme_identify 00:02:24.637 LINK spdk_nvme_perf 00:02:24.637 CC app/vhost/vhost.o 00:02:24.637 CC test/event/reactor_perf/reactor_perf.o 00:02:24.637 CC test/event/event_perf/event_perf.o 00:02:24.637 CC examples/sock/hello_world/hello_sock.o 00:02:24.637 CC examples/vmd/led/led.o 00:02:24.637 CC test/event/reactor/reactor.o 00:02:24.637 CC examples/idxd/perf/perf.o 00:02:24.637 CC examples/vmd/lsvmd/lsvmd.o 00:02:24.637 LINK vhost_fuzz 00:02:24.637 CC test/event/app_repeat/app_repeat.o 00:02:24.637 CC examples/thread/thread/thread_ex.o 00:02:24.637 CC test/event/scheduler/scheduler.o 00:02:24.895 LINK reactor_perf 00:02:24.895 LINK event_perf 00:02:24.895 LINK lsvmd 00:02:24.895 LINK mem_callbacks 00:02:24.895 LINK led 00:02:24.895 LINK reactor 00:02:24.895 LINK spdk_top 00:02:24.895 LINK vhost 00:02:24.895 LINK app_repeat 00:02:24.895 LINK hello_sock 00:02:25.153 LINK scheduler 00:02:25.153 CC test/nvme/aer/aer.o 00:02:25.153 CC test/nvme/boot_partition/boot_partition.o 00:02:25.153 CC test/nvme/e2edp/nvme_dp.o 00:02:25.153 CC test/nvme/simple_copy/simple_copy.o 00:02:25.153 CC test/nvme/sgl/sgl.o 00:02:25.153 CC test/nvme/overhead/overhead.o 00:02:25.153 LINK idxd_perf 00:02:25.153 CC test/nvme/fdp/fdp.o 00:02:25.153 LINK thread 00:02:25.153 CC test/nvme/reset/reset.o 00:02:25.153 CC test/nvme/fused_ordering/fused_ordering.o 00:02:25.153 CC test/nvme/compliance/nvme_compliance.o 00:02:25.153 CC test/nvme/cuse/cuse.o 00:02:25.153 CC test/nvme/err_injection/err_injection.o 00:02:25.153 CC test/nvme/reserve/reserve.o 00:02:25.153 CC test/nvme/startup/startup.o 00:02:25.153 CC test/nvme/connect_stress/connect_stress.o 00:02:25.153 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:25.153 CC test/blobfs/mkfs/mkfs.o 00:02:25.153 CC test/accel/dif/dif.o 00:02:25.153 CC test/lvol/esnap/esnap.o 00:02:25.153 LINK boot_partition 00:02:25.153 LINK memory_ut 00:02:25.411 LINK startup 00:02:25.411 LINK err_injection 00:02:25.411 LINK connect_stress 00:02:25.411 LINK doorbell_aers 00:02:25.411 LINK simple_copy 00:02:25.411 LINK reserve 00:02:25.411 LINK fused_ordering 00:02:25.411 LINK mkfs 00:02:25.411 LINK reset 00:02:25.411 LINK nvme_dp 00:02:25.411 LINK overhead 00:02:25.411 LINK nvme_compliance 00:02:25.411 LINK sgl 00:02:25.411 LINK fdp 00:02:25.411 CC examples/nvme/hello_world/hello_world.o 00:02:25.411 CC examples/nvme/reconnect/reconnect.o 00:02:25.411 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:25.411 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:25.411 CC examples/nvme/hotplug/hotplug.o 00:02:25.411 CC examples/nvme/abort/abort.o 00:02:25.411 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:25.411 CC examples/nvme/arbitration/arbitration.o 00:02:25.669 LINK dif 00:02:25.669 CC examples/accel/perf/accel_perf.o 00:02:25.669 CC examples/blob/hello_world/hello_blob.o 00:02:25.669 CC examples/blob/cli/blobcli.o 00:02:25.669 LINK aer 
00:02:25.669 LINK pmr_persistence 00:02:25.669 LINK cmb_copy 00:02:25.669 LINK hotplug 00:02:25.669 LINK hello_world 00:02:25.927 LINK arbitration 00:02:25.927 LINK reconnect 00:02:25.927 LINK abort 00:02:25.927 LINK hello_blob 00:02:25.927 LINK iscsi_fuzz 00:02:25.927 LINK nvme_manage 00:02:26.186 CC test/bdev/bdevio/bdevio.o 00:02:26.186 LINK blobcli 00:02:26.445 LINK cuse 00:02:26.445 LINK accel_perf 00:02:26.445 LINK bdevio 00:02:27.014 CC examples/bdev/hello_world/hello_bdev.o 00:02:27.014 CC examples/bdev/bdevperf/bdevperf.o 00:02:27.272 LINK hello_bdev 00:02:27.841 LINK bdevperf 00:02:28.409 CC examples/nvmf/nvmf/nvmf.o 00:02:28.667 LINK nvmf 00:02:35.236 LINK esnap 00:02:35.803 00:02:35.803 real 0m59.961s 00:02:35.803 user 8m37.640s 00:02:35.803 sys 4m17.131s 00:02:35.803 00:28:53 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:35.803 00:28:53 make -- common/autotest_common.sh@10 -- $ set +x 00:02:35.803 ************************************ 00:02:35.803 END TEST make 00:02:35.803 ************************************ 00:02:35.803 00:28:53 -- common/autotest_common.sh@1142 -- $ return 0 00:02:35.803 00:28:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:35.803 00:28:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:35.803 00:28:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:35.803 00:28:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.803 00:28:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:35.803 00:28:53 -- pm/common@44 -- $ pid=2721858 00:02:35.803 00:28:53 -- pm/common@50 -- $ kill -TERM 2721858 00:02:35.803 00:28:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.803 00:28:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:35.803 00:28:53 -- pm/common@44 -- $ pid=2721860 00:02:35.803 00:28:53 -- pm/common@50 -- $ kill -TERM 2721860 00:02:35.803 00:28:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.804 00:28:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:35.804 00:28:53 -- pm/common@44 -- $ pid=2721861 00:02:35.804 00:28:53 -- pm/common@50 -- $ kill -TERM 2721861 00:02:35.804 00:28:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.804 00:28:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:35.804 00:28:53 -- pm/common@44 -- $ pid=2721884 00:02:35.804 00:28:53 -- pm/common@50 -- $ sudo -E kill -TERM 2721884 00:02:35.804 00:28:53 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:35.804 00:28:53 -- nvmf/common.sh@7 -- # uname -s 00:02:35.804 00:28:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:35.804 00:28:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:35.804 00:28:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:35.804 00:28:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:35.804 00:28:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:35.804 00:28:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:35.804 00:28:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:35.804 00:28:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:35.804 00:28:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:35.804 
00:28:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:35.804 00:28:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:02:35.804 00:28:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:02:35.804 00:28:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:35.804 00:28:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:35.804 00:28:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:35.804 00:28:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:35.804 00:28:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:35.804 00:28:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:35.804 00:28:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.804 00:28:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.804 00:28:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.804 00:28:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.804 00:28:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.804 00:28:53 -- paths/export.sh@5 -- # export PATH 00:02:35.804 00:28:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.804 00:28:53 -- nvmf/common.sh@47 -- # : 0 00:02:35.804 00:28:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:35.804 00:28:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:35.804 00:28:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:35.804 00:28:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:35.804 00:28:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:35.804 00:28:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:35.804 00:28:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:35.804 00:28:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:35.804 00:28:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:35.804 00:28:53 -- spdk/autotest.sh@32 -- # uname -s 00:02:35.804 00:28:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:35.804 00:28:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:35.804 00:28:53 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:35.804 00:28:53 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:35.804 
00:28:53 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:35.804 00:28:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:35.804 00:28:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:35.804 00:28:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:35.804 00:28:53 -- spdk/autotest.sh@48 -- # udevadm_pid=2785491 00:02:35.804 00:28:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:35.804 00:28:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:35.804 00:28:53 -- pm/common@17 -- # local monitor 00:02:35.804 00:28:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.804 00:28:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.804 00:28:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.804 00:28:53 -- pm/common@21 -- # date +%s 00:02:35.804 00:28:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.804 00:28:53 -- pm/common@21 -- # date +%s 00:02:35.804 00:28:53 -- pm/common@25 -- # sleep 1 00:02:35.804 00:28:53 -- pm/common@21 -- # date +%s 00:02:35.804 00:28:53 -- pm/common@21 -- # date +%s 00:02:35.804 00:28:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721082533 00:02:35.804 00:28:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721082533 00:02:35.804 00:28:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721082533 00:02:35.804 00:28:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721082533 00:02:35.804 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721082533_collect-vmstat.pm.log 00:02:35.804 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721082533_collect-cpu-load.pm.log 00:02:35.804 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721082533_collect-cpu-temp.pm.log 00:02:35.804 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721082533_collect-bmc-pm.bmc.pm.log 00:02:36.739 00:28:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:36.739 00:28:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:36.739 00:28:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:36.739 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:02:36.739 00:28:54 -- spdk/autotest.sh@59 -- # create_test_list 00:02:36.739 00:28:54 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:36.739 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:02:36.998 00:28:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:36.998 00:28:54 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.998 00:28:54 -- spdk/autotest.sh@61 -- # 
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.998 00:28:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:36.998 00:28:54 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.998 00:28:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:36.998 00:28:54 -- common/autotest_common.sh@1455 -- # uname 00:02:36.998 00:28:54 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:36.998 00:28:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:36.998 00:28:54 -- common/autotest_common.sh@1475 -- # uname 00:02:36.998 00:28:54 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:36.998 00:28:54 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:36.998 00:28:54 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:36.998 00:28:54 -- spdk/autotest.sh@72 -- # hash lcov 00:02:36.998 00:28:54 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:36.998 00:28:54 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:36.998 --rc lcov_branch_coverage=1 00:02:36.998 --rc lcov_function_coverage=1 00:02:36.998 --rc genhtml_branch_coverage=1 00:02:36.998 --rc genhtml_function_coverage=1 00:02:36.998 --rc genhtml_legend=1 00:02:36.998 --rc geninfo_all_blocks=1 00:02:36.998 ' 00:02:36.998 00:28:54 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:36.998 --rc lcov_branch_coverage=1 00:02:36.998 --rc lcov_function_coverage=1 00:02:36.998 --rc genhtml_branch_coverage=1 00:02:36.998 --rc genhtml_function_coverage=1 00:02:36.998 --rc genhtml_legend=1 00:02:36.998 --rc geninfo_all_blocks=1 00:02:36.998 ' 00:02:36.998 00:28:54 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:36.998 --rc lcov_branch_coverage=1 00:02:36.998 --rc lcov_function_coverage=1 00:02:36.998 --rc genhtml_branch_coverage=1 00:02:36.998 --rc genhtml_function_coverage=1 00:02:36.998 --rc genhtml_legend=1 00:02:36.998 --rc geninfo_all_blocks=1 00:02:36.998 --no-external' 00:02:36.998 00:28:54 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:36.998 --rc lcov_branch_coverage=1 00:02:36.998 --rc lcov_function_coverage=1 00:02:36.998 --rc genhtml_branch_coverage=1 00:02:36.998 --rc genhtml_function_coverage=1 00:02:36.998 --rc genhtml_legend=1 00:02:36.998 --rc geninfo_all_blocks=1 00:02:36.998 --no-external' 00:02:36.998 00:28:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:36.998 lcov: LCOV version 1.14 00:02:36.998 00:28:54 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:38.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:38.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:38.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:38.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:38.903 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:38.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:38.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:38.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:38.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:38.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:38.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:38.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:38.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:38.903 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:38.904 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:38.904 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:39.163 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:39.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:39.163 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:39.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:39.164 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:39.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:39.423 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:39.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:39.423 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:39.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:39.423 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:39.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:39.423 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:39.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:39.423 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:39.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:39.423 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:39.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:39.423 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:39.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:39.423 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:39.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:39.423 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:54.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:54.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:12.380 00:29:29 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:12.380 00:29:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:12.380 00:29:29 -- common/autotest_common.sh@10 -- # set +x 00:03:12.380 00:29:29 -- spdk/autotest.sh@91 -- # rm -f 00:03:12.380 00:29:29 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.287 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:03:14.287 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:14.287 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:14.287 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:14.287 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:14.287 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:14.287 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:14.287 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:14.546 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:14.546 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:14.546 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:14.546 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:14.546 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:14.546 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:14.546 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:14.546 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:14.546 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:14.546 00:29:32 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:14.546 00:29:32 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:14.546 00:29:32 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:14.546 00:29:32 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:14.546 00:29:32 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:14.546 00:29:32 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:14.546 00:29:32 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 
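The long run of geninfo warnings above is expected at this point: the baseline capture (lcov -c -i -t Baseline) walks every .gcno file before any test has run, so objects that contain no functions yet are reported but still recorded at zero coverage. A minimal sketch of the baseline-then-merge flow, with options abridged from the log; cov_test.info and cov_total.info are hypothetical names for the post-test capture and the combined report:

    # sketch -- baseline before the tests, capture after them, then combine the tracefiles
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external"
    $LCOV -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"   # zero-coverage baseline (the step above)
    # ... run the test suites ...
    $LCOV -q -c    -t Tests    -d "$src" -o "$out/cov_test.info"   # hypothetical post-test capture
    $LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"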
00:03:14.546 00:29:32 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.546 00:29:32 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:14.546 00:29:32 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:14.546 00:29:32 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:14.546 00:29:32 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:14.546 00:29:32 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:14.546 00:29:32 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:14.546 00:29:32 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:14.805 No valid GPT data, bailing 00:03:14.805 00:29:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:14.805 00:29:32 -- scripts/common.sh@391 -- # pt= 00:03:14.805 00:29:32 -- scripts/common.sh@392 -- # return 1 00:03:14.805 00:29:32 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:14.805 1+0 records in 00:03:14.805 1+0 records out 00:03:14.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00235291 s, 446 MB/s 00:03:14.805 00:29:32 -- spdk/autotest.sh@118 -- # sync 00:03:14.805 00:29:32 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:14.805 00:29:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:14.805 00:29:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:21.456 00:29:38 -- spdk/autotest.sh@124 -- # uname -s 00:03:21.456 00:29:38 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:21.456 00:29:38 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:21.456 00:29:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.456 00:29:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.456 00:29:38 -- common/autotest_common.sh@10 -- # set +x 00:03:21.456 ************************************ 00:03:21.456 START TEST setup.sh 00:03:21.456 ************************************ 00:03:21.456 00:29:38 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:21.456 * Looking for test storage... 00:03:21.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:21.456 00:29:38 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:21.456 00:29:38 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:21.456 00:29:38 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:21.456 00:29:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.456 00:29:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.456 00:29:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:21.456 ************************************ 00:03:21.456 START TEST acl 00:03:21.456 ************************************ 00:03:21.456 00:29:38 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:21.456 * Looking for test storage... 
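Two checks above gate whether the NVMe namespace gets scrubbed before the setup tests: is_block_zoned inspects /sys/block/<dev>/queue/zoned, and block_in_use asks spdk-gpt.py and blkid whether a partition table exists ("No valid GPT data, bailing"). A minimal sketch of that gate, simplified from what the xtrace shows rather than copied from autotest_common.sh:

    # sketch -- simplified from the checks traced above
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]    # "none" means a regular, non-zoned device
    }

    dev=/dev/nvme0n1
    if ! is_block_zoned "${dev##*/}"; then
        # the log also requires the separate block_in_use/spdk-gpt.py probe to find no
        # partition table before this wipe runs
        dd if=/dev/zero of="$dev" bs=1M count=1                 # wipe the first MiB, as the log shows
    fi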
00:03:21.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:21.456 00:29:38 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:21.456 00:29:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:21.456 00:29:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:21.456 00:29:38 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:21.456 00:29:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:21.456 00:29:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:21.456 00:29:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:21.456 00:29:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:21.456 00:29:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:21.456 00:29:38 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:21.456 00:29:38 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:21.456 00:29:38 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:21.456 00:29:38 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:21.456 00:29:38 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:21.456 00:29:38 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:21.456 00:29:38 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.991 00:29:41 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:23.991 00:29:41 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:23.991 00:29:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:23.991 00:29:41 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:23.991 00:29:41 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.991 00:29:41 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:27.280 Hugepages 00:03:27.280 node hugesize free / total 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:03:27.280 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.280 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:86:00.0 == *:*:*.* ]] 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:27.281 00:29:44 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:27.281 00:29:44 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.281 00:29:44 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.281 00:29:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:27.281 ************************************ 00:03:27.281 START TEST denied 00:03:27.281 ************************************ 00:03:27.281 00:29:44 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:27.281 00:29:44 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:86:00.0' 00:03:27.281 00:29:44 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:27.281 00:29:44 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:86:00.0' 00:03:27.281 00:29:44 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.281 00:29:44 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:30.569 0000:86:00.0 (8086 0a54): Skipping denied controller at 0000:86:00.0 00:03:30.569 00:29:47 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:86:00.0 00:03:30.569 00:29:47 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:30.569 00:29:47 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:30.569 00:29:47 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:86:00.0 ]] 00:03:30.569 00:29:47 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:86:00.0/driver 00:03:30.569 00:29:47 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:30.569 00:29:47 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:30.569 00:29:47 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:30.569 00:29:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.569 00:29:47 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.758 00:03:34.758 real 0m7.274s 00:03:34.758 user 0m2.398s 00:03:34.758 sys 0m4.131s 00:03:34.758 00:29:52 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.758 00:29:52 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:34.758 ************************************ 00:03:34.758 END TEST denied 00:03:34.758 ************************************ 00:03:34.758 00:29:52 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:34.758 00:29:52 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:34.758 00:29:52 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.758 00:29:52 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.758 00:29:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:34.758 ************************************ 00:03:34.758 START TEST allowed 00:03:34.758 ************************************ 00:03:34.758 00:29:52 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:34.758 00:29:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:86:00.0 00:03:34.758 00:29:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:34.758 00:29:52 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:86:00.0 .*: nvme -> .*' 00:03:34.758 00:29:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.758 00:29:52 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.956 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:38.956 00:29:56 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:38.956 00:29:56 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:38.956 00:29:56 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:38.956 00:29:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.956 00:29:56 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.244 00:03:42.244 real 0m7.279s 00:03:42.244 user 0m2.244s 00:03:42.244 sys 0m4.121s 00:03:42.244 00:29:59 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.244 00:29:59 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:42.244 ************************************ 00:03:42.244 END TEST allowed 00:03:42.244 ************************************ 00:03:42.244 00:29:59 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:42.244 00:03:42.244 real 0m20.926s 00:03:42.244 user 0m7.090s 00:03:42.244 sys 0m12.390s 00:03:42.244 00:29:59 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.244 00:29:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.244 ************************************ 00:03:42.244 END TEST acl 00:03:42.244 ************************************ 00:03:42.244 00:29:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:42.244 00:29:59 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:42.244 00:29:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.244 00:29:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.244 00:29:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:42.244 ************************************ 00:03:42.244 START TEST hugepages 00:03:42.244 ************************************ 00:03:42.244 00:29:59 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:42.244 * Looking for test storage... 00:03:42.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 69308568 kB' 'MemAvailable: 72958520 kB' 'Buffers: 8400 kB' 'Cached: 14556884 kB' 'SwapCached: 0 kB' 'Active: 11712516 kB' 'Inactive: 3539720 kB' 'Active(anon): 11259692 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 690376 kB' 'Mapped: 216860 kB' 'Shmem: 10572740 kB' 'KReclaimable: 493368 kB' 'Slab: 1160168 kB' 'SReclaimable: 493368 kB' 'SUnreclaim: 666800 kB' 'KernelStack: 22784 kB' 'PageTables: 10116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434752 kB' 'Committed_AS: 12695620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219864 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.244 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.245 
00:29:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:42.245 00:29:59 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:42.245 00:29:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.245 00:29:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.245 00:29:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.245 ************************************ 00:03:42.245 START TEST default_setup 00:03:42.245 ************************************ 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.245 00:29:59 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:44.779 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:44.779 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:44.779 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:44.779 
0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:44.779 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:44.779 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:44.779 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:44.779 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:44.779 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:45.038 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:45.038 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:45.038 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:45.038 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:45.038 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:45.038 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:45.038 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:45.978 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71384072 kB' 'MemAvailable: 75033928 kB' 'Buffers: 8400 kB' 'Cached: 14556992 kB' 'SwapCached: 0 kB' 'Active: 11738132 kB' 'Inactive: 3539720 kB' 'Active(anon): 11285308 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 715716 kB' 'Mapped: 217772 kB' 'Shmem: 10572848 kB' 'KReclaimable: 493272 kB' 'Slab: 1156704 kB' 'SReclaimable: 493272 kB' 'SUnreclaim: 663432 
kB' 'KernelStack: 22960 kB' 'PageTables: 9964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12728872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220076 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.978 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
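The loop traced above is setup/common.sh scanning /proc/meminfo key by key: every line whose key is not the requested one (here AnonHugePages) hits the continue branch, and only the matching line's value is echoed back to the caller. A minimal sketch of that read pattern, with an illustrative helper name rather than the exact function from setup/common.sh:

# Sketch only: mirrors the IFS=': ' read loop visible in the xtrace above.
# The helper name is an illustrative assumption, not a verbatim copy of
# setup/common.sh (which additionally handles per-node meminfo files).
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching key is skipped, as in the log
        echo "$val"                        # value in kB, or a bare page count
        return 0
    done < /proc/meminfo
    return 1
}
# e.g. meminfo_value Hugepagesize -> 2048 on this runner; meminfo_value AnonHugePages -> 0

The trace is long precisely because this scan runs once per requested key (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd) and xtrace prints every skipped line.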
00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.979 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71385640 kB' 'MemAvailable: 75035496 kB' 'Buffers: 8400 kB' 'Cached: 14556992 kB' 'SwapCached: 0 kB' 'Active: 11737596 kB' 'Inactive: 3539720 kB' 'Active(anon): 11284772 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 715692 kB' 'Mapped: 217688 kB' 'Shmem: 10572848 kB' 'KReclaimable: 493272 kB' 'Slab: 1156688 kB' 'SReclaimable: 493272 kB' 'SUnreclaim: 663416 kB' 'KernelStack: 23024 kB' 'PageTables: 10168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12728892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220060 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.980 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.981 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71385968 kB' 'MemAvailable: 75035824 kB' 'Buffers: 8400 kB' 'Cached: 14557012 kB' 'SwapCached: 0 kB' 'Active: 11737976 kB' 'Inactive: 3539720 kB' 'Active(anon): 11285152 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 715764 kB' 'Mapped: 217680 kB' 'Shmem: 10572868 kB' 'KReclaimable: 493272 kB' 'Slab: 1156624 kB' 'SReclaimable: 493272 kB' 'SUnreclaim: 663352 kB' 'KernelStack: 23056 kB' 'PageTables: 10464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12728912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220060 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 
00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.982 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.983 nr_hugepages=1024 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.983 resv_hugepages=0 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.983 surplus_hugepages=0 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.983 anon_hugepages=0 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.983 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 
00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71385584 kB' 'MemAvailable: 75035440 kB' 'Buffers: 8400 kB' 'Cached: 14557036 kB' 'SwapCached: 0 kB' 'Active: 11737468 kB' 'Inactive: 3539720 kB' 'Active(anon): 11284644 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 715224 kB' 'Mapped: 217680 kB' 'Shmem: 10572892 kB' 'KReclaimable: 493272 kB' 'Slab: 1156560 kB' 'SReclaimable: 493272 kB' 'SUnreclaim: 663288 kB' 'KernelStack: 22880 kB' 'PageTables: 9904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12728936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220108 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.984 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.985 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40690052 kB' 'MemUsed: 7378344 kB' 'SwapCached: 0 kB' 'Active: 3987768 kB' 'Inactive: 228308 kB' 'Active(anon): 3860732 kB' 'Inactive(anon): 0 kB' 'Active(file): 127036 kB' 'Inactive(file): 228308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4038772 kB' 'Mapped: 97064 kB' 'AnonPages: 180664 kB' 'Shmem: 3683428 kB' 'KernelStack: 12536 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122744 kB' 'Slab: 403300 kB' 
'SReclaimable: 122744 kB' 'SUnreclaim: 280556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.245 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
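The xtrace above is the setup script's meminfo reader walking node0's meminfo field by field until it reaches the requested key. A minimal sketch of what that helper appears to do, reconstructed from the trace rather than taken from the actual setup/common.sh, with illustrative names and structure:

# Reconstruction, inferred from the logged commands above, of a
# get_meminfo-style helper. NOT the actual setup/common.sh source.
shopt -s extglob

get_meminfo() {
        local get=$1      # field to report, e.g. HugePages_Surp
        local node=${2:-} # optional NUMA node number
        local var val _ line
        local mem_f=/proc/meminfo mem

        # Per-node queries read the node-local meminfo when it exists;
        # with an empty $node the path is invalid and /proc/meminfo is used.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <N> "; strip that part.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
                # Split "Key:   value kB" into its key and value.
                IFS=': ' read -r var val _ <<< "$line"
                if [[ $var == "$get" ]]; then
                        echo "$val"
                        return 0
                fi
        done
        return 1
}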
00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
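The per_node_1G_alloc test that starts a little further down sizes its request as 1 GiB per node: 1048576 kB divided by the 2048 kB Hugepagesize reported in this log gives 512 pages on each of nodes 0 and 1, which the harness hands to scripts/setup.sh as NRHUGE=512 HUGENODE=0,1. A hedged sketch of that arithmetic, and of the standard per-node sysfs knob such a setup ultimately drives (the sysfs path is generic kernel behaviour, not a quote from setup.sh), is:

    # Sketch only: per-node hugepage sizing as the per_node_1G_alloc trace implies.
    size_kb=1048576                                                  # 1 GiB requested per node
    hugepage_kb=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this rig
    nr_hugepages=$(( size_kb / hugepage_kb ))                        # 512 pages per node
    for node in 0 1; do
        echo "would set $nr_hugepages pages via" \
             "/sys/devices/system/node/node$node/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages"
    done
    # equivalent invocation seen in the trace: NRHUGE=512 HUGENODE=0,1 scripts/setup.sh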
00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.246 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.247 node0=1024 expecting 1024 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.247 00:03:46.247 real 0m4.134s 00:03:46.247 user 0m1.324s 00:03:46.247 sys 0m2.049s 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.247 00:30:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:46.247 ************************************ 00:03:46.247 END TEST default_setup 00:03:46.247 ************************************ 00:03:46.247 00:30:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:46.247 00:30:03 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:46.247 00:30:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.247 00:30:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.247 00:30:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.247 ************************************ 00:03:46.247 START TEST per_node_1G_alloc 00:03:46.247 ************************************ 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.247 00:30:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.542 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:49.542 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:49.542 
0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:49.542 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71467072 kB' 'MemAvailable: 75116896 kB' 'Buffers: 8400 kB' 'Cached: 14498584 kB' 'SwapCached: 0 kB' 'Active: 11677540 kB' 'Inactive: 3539720 kB' 'Active(anon): 11224716 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 713684 kB' 'Mapped: 161984 kB' 'Shmem: 10514440 kB' 'KReclaimable: 493240 kB' 'Slab: 1157124 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663884 kB' 'KernelStack: 23024 kB' 'PageTables: 9852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12656212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220380 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.542 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.543 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71469212 kB' 'MemAvailable: 75119036 kB' 'Buffers: 8400 kB' 'Cached: 14498588 kB' 'SwapCached: 0 kB' 'Active: 11678100 kB' 'Inactive: 3539720 kB' 'Active(anon): 11225276 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 714192 kB' 'Mapped: 161976 kB' 'Shmem: 10514444 kB' 'KReclaimable: 493240 kB' 'Slab: 1157104 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663864 kB' 'KernelStack: 23216 kB' 'PageTables: 9876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12656232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220268 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 
00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.544 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.544 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71467516 kB' 'MemAvailable: 75117340 kB' 'Buffers: 8400 kB' 'Cached: 14498608 kB' 'SwapCached: 0 kB' 'Active: 11677760 kB' 'Inactive: 3539720 kB' 'Active(anon): 11224936 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 713828 kB' 'Mapped: 161984 kB' 'Shmem: 10514464 kB' 'KReclaimable: 493240 kB' 'Slab: 1157072 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663832 kB' 'KernelStack: 23088 kB' 'PageTables: 9604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12656256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220236 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.545 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.546 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.547 00:30:06 
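
[editor's note] The trace above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo field until it reaches the requested key (HugePages_Surp, then HugePages_Rsvd), echoing the value and returning; both come back 0 in this run. A minimal sketch of that scan, assuming only what the trace shows (IFS=': ', read -r var val _, the continue-until-match loop) and not the exact SPDK source:

    # Hypothetical re-creation of the key scan seen at common.sh@31-33 above;
    # names (get, mem, var, val) come from the trace, the function body is assumed.
    get_value() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo            # one "Key: value kB" entry per element
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue      # skip every field that is not the requested one
            echo "$val"                           # e.g. 0 for HugePages_Rsvd in this run
            return 0
        done
        return 1
    }
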
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.547 nr_hugepages=1024 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.547 resv_hugepages=0 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.547 surplus_hugepages=0 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.547 anon_hugepages=0 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.547 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71468148 kB' 'MemAvailable: 75117972 kB' 'Buffers: 8400 kB' 'Cached: 14498628 kB' 'SwapCached: 0 kB' 'Active: 11677268 kB' 'Inactive: 3539720 kB' 'Active(anon): 11224444 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 713384 kB' 'Mapped: 161984 kB' 'Shmem: 10514484 kB' 'KReclaimable: 493240 kB' 'Slab: 1157072 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663832 kB' 'KernelStack: 22944 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12655860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220172 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 
kB' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.548 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.549 00:30:06 
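
[editor's note] At this point hugepages.sh has collected nr_hugepages=1024, surp=0 and resv=0 from the meminfo dumps above, so the check at hugepages.sh@110 passes because 1024 == 1024 + 0 + 0. The snapshot is also internally consistent on size: with Hugepagesize: 2048 kB, 1024 pages account for 1024 x 2048 kB = 2097152 kB, which matches the Hugetlb field printed earlier. A small recap of that bookkeeping using the values from this run (a sketch, not part of the test scripts):

    # Hypothetical recap of the accounting implied by hugepages.sh@107-110 above.
    nr_hugepages=1024 surp=0 resv=0 hugepagesize_kb=2048
    (( 1024 == nr_hugepages + surp + resv )) && echo "pool matches requested size"
    echo "expected Hugetlb: $(( nr_hugepages * hugepagesize_kb )) kB"   # 2097152 kB, as in /proc/meminfo
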
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41778656 kB' 'MemUsed: 6289740 kB' 'SwapCached: 0 kB' 'Active: 3981776 kB' 'Inactive: 228308 kB' 'Active(anon): 3854740 kB' 'Inactive(anon): 0 kB' 'Active(file): 127036 kB' 'Inactive(file): 228308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4034536 kB' 'Mapped: 42508 kB' 'AnonPages: 178912 kB' 'Shmem: 3679192 kB' 'KernelStack: 12472 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122712 kB' 'Slab: 403360 kB' 'SReclaimable: 122712 kB' 'SUnreclaim: 280648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 
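
[editor's note] The get_nodes block above discovers two NUMA nodes and plans 512 pages per node, then get_meminfo is re-entered with node=0, so common.sh@23-24 switches the source file from /proc/meminfo to /sys/devices/system/node/node0/meminfo and common.sh@29 strips the leading "Node 0 " prefix so the same key scan applies. A sketch of that source selection, assuming only the behaviour visible in the trace:

    # Hypothetical sketch of the per-node selection seen at common.sh@22-29 above.
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view, node0 in this run
    fi
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix so keys match /proc/meminfo
    printf '%s\n' "${mem[@]:0:3}"      # first few entries, now "MemTotal: ...", "MemFree: ...", ...
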
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.549 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 
00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.550 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 29690160 kB' 'MemUsed: 14528048 kB' 'SwapCached: 0 kB' 'Active: 7695216 kB' 'Inactive: 3311412 kB' 'Active(anon): 7369428 kB' 'Inactive(anon): 0 kB' 'Active(file): 325788 kB' 'Inactive(file): 3311412 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10472492 kB' 'Mapped: 119536 kB' 'AnonPages: 534220 kB' 'Shmem: 6835292 kB' 
'KernelStack: 10392 kB' 'PageTables: 5580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 370528 kB' 'Slab: 753968 kB' 'SReclaimable: 370528 kB' 'SUnreclaim: 383440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.551 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:49.552 node0=512 expecting 512 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:49.552 node1=512 expecting 512 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:49.552 00:03:49.552 real 0m3.097s 00:03:49.552 user 0m1.259s 00:03:49.552 sys 0m1.882s 00:03:49.552 00:30:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.552 00:30:07 
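For readers following this trace: the xtrace above is setup/common.sh's get_meminfo helper walking node1's meminfo dump (printed a few lines earlier) field by field until it reaches the requested key, HugePages_Surp, and echoing its value (0 here), after which hugepages.sh confirms node0=512 and node1=512 as expected. The real helper uses mapfile plus an extglob "Node N " prefix strip; what follows is only a simplified, hand-written sketch of the same parsing idea, not the repository's exact code:

get_meminfo() {
    # Usage: get_meminfo <field> [node]  -> prints the field's value (kB or page count)
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    # Per-node queries read that node's own meminfo file instead of the global one.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node "$node" }           # per-node rows are prefixed with "Node N "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

get_meminfo HugePages_Surp 1   # on this host: 0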
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:49.552 ************************************ 00:03:49.552 END TEST per_node_1G_alloc 00:03:49.552 ************************************ 00:03:49.552 00:30:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:49.552 00:30:07 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:49.552 00:30:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.552 00:30:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.552 00:30:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:49.552 ************************************ 00:03:49.552 START TEST even_2G_alloc 00:03:49.552 ************************************ 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:49.552 00:30:07 
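The even_2G_alloc test starting here requests 2 GiB worth of the default 2048 kB hugepages (2097152 / 2048 = 1024 pages) and, because HUGE_EVEN_ALLOC=yes and no node list is given, seeds nodes_test with an even share per NUMA node, which is why both entries above are set to 512. A minimal sketch of that arithmetic, with names chosen here for illustration (it ignores the remainder handling the real helper performs):

even_hugepage_split() {
    local size_kb=$1
    local hugepage_kb nr_pages per_node n
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this host
    nr_pages=$(( size_kb / hugepage_kb ))                            # 2097152 / 2048 = 1024
    local nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( nr_pages / ${#nodes[@]} ))                          # 1024 / 2 = 512
    for n in "${nodes[@]}"; do
        echo "${n##*/}=${per_node}"
    done
}

even_hugepage_split 2097152   # on a two-node box prints node0=512 and node1=512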
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.552 00:30:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.086 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.086 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.086 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.349 00:30:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:52.349 00:30:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:52.349 00:30:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.349 00:30:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.349 00:30:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:52.349 00:30:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:52.349 00:30:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:52.349 00:30:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.349 00:30:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.349 00:30:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.349 00:30:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.349 00:30:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.349 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.349 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.349 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.349 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.349 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.349 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.349 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.349 00:30:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71473628 kB' 'MemAvailable: 75123452 kB' 'Buffers: 8400 kB' 'Cached: 14498736 kB' 'SwapCached: 0 kB' 'Active: 11671928 kB' 'Inactive: 3539720 kB' 'Active(anon): 11219104 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 707804 kB' 'Mapped: 162180 kB' 'Shmem: 10514592 kB' 'KReclaimable: 493240 kB' 'Slab: 1157936 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 664696 kB' 'KernelStack: 22736 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12649684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220152 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.350 00:30:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.350 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71474164 kB' 'MemAvailable: 75123988 kB' 'Buffers: 8400 kB' 'Cached: 14498740 kB' 'SwapCached: 0 kB' 'Active: 11671204 kB' 'Inactive: 3539720 kB' 'Active(anon): 11218380 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 707072 kB' 'Mapped: 161112 kB' 'Shmem: 10514596 kB' 'KReclaimable: 493240 kB' 'Slab: 1157972 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 664732 kB' 'KernelStack: 22720 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12646464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220056 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
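The anon=0 recorded just above comes from verify_nr_hugepages first checking the transparent-hugepage mode: /sys/kernel/mm/transparent_hugepage/enabled reads "always [madvise] never" on this host, and since the selected mode is not [never] the script queries AnonHugePages through get_meminfo, which returns 0 kB here. A rough stand-alone equivalent of that check, assuming the standard sysfs knob (an illustration, not the script's exact code):

anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP is not globally disabled, so note how much anonymous huge memory is in use.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=${anon}"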
00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.351 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.352 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71476144 kB' 'MemAvailable: 75125968 kB' 'Buffers: 8400 kB' 'Cached: 14498756 kB' 'SwapCached: 0 kB' 'Active: 11670912 kB' 'Inactive: 3539720 kB' 'Active(anon): 11218088 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 706928 kB' 'Mapped: 161112 kB' 'Shmem: 10514612 kB' 'KReclaimable: 493240 kB' 'Slab: 1157972 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 664732 kB' 'KernelStack: 22720 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12645964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220072 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 
00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
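The mapfile step in each get_meminfo call above rewrites the array with "${mem[@]#Node +([0-9]) }", an extglob parameter expansion that strips the "Node <N> " prefix carried by per-node meminfo files so they parse the same way as /proc/meminfo. A small illustration of just that expansion, with invented sample lines:

#!/usr/bin/env bash
# Illustration of the 'Node +([0-9]) ' prefix strip seen in the trace;
# the two sample lines below are made up for the example.
shopt -s extglob                      # +([0-9]) is an extended glob pattern
mem=('Node 0 MemTotal: 46143302 kB' 'Node 0 HugePages_Surp: 0')
mem=("${mem[@]#Node +([0-9]) }")      # drop the leading 'Node 0 '
printf '%s\n' "${mem[@]}"             # -> 'MemTotal: 46143302 kB' and 'HugePages_Surp: 0'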
00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.353 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.354 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.355 nr_hugepages=1024 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.355 resv_hugepages=0 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.355 surplus_hugepages=0 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.355 anon_hugepages=0 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.355 00:30:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71475892 kB' 'MemAvailable: 75125716 kB' 'Buffers: 8400 kB' 'Cached: 14498780 kB' 'SwapCached: 0 kB' 'Active: 11670852 kB' 'Inactive: 3539720 kB' 'Active(anon): 11218028 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 706752 kB' 'Mapped: 161112 kB' 'Shmem: 10514636 kB' 'KReclaimable: 493240 kB' 'Slab: 1157972 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 664732 kB' 'KernelStack: 22720 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12646132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220040 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 
00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': '
00:03:52.355 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 reads each remaining /proc/meminfo field (Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) and continues past each until HugePages_Total matches]
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
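Up to this point the trace is setup/common.sh's get_meminfo stepping through /proc/meminfo one field at a time until the requested key (here HugePages_Total) matches, and the same helper is invoked again below against /sys/devices/system/node/node0/meminfo and node1/meminfo to read the per-node HugePages_Surp values. A minimal sketch of that flow, using a hypothetical get_meminfo_sketch function rather than the actual setup/common.sh source:

# Hypothetical re-creation of the get_meminfo flow traced in this log (not the
# actual setup/common.sh source): return one field from /proc/meminfo, or from a
# per-NUMA-node meminfo file when a node number is given.
get_meminfo_sketch() {
    local get=$1 node=$2 line var val rest
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}             # per-node files prefix each line with "Node N "
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                        # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
            return 0
        fi
    done < "$mem_f"
    return 1
}
# Example: get_meminfo_sketch HugePages_Total   -> 1024 (the echo above)
#          get_meminfo_sketch HugePages_Surp 0  -> 0    (the per-node checks below)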
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:52.356 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:52.357 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:52.357 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.357 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:52.357 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.357 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.357 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.357 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:52.357 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:52.357 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.357 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.357 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.357 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.357 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41809980 kB' 'MemUsed: 6258416 kB' 'SwapCached: 0 kB' 'Active: 3976104 kB' 'Inactive: 228308 kB' 'Active(anon): 3849068 kB' 'Inactive(anon): 0 kB' 'Active(file): 127036 kB' 'Inactive(file): 228308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4034660 kB' 'Mapped: 42496 kB' 'AnonPages: 172972 kB' 'Shmem: 3679316 kB' 'KernelStack: 12424 kB' 'PageTables: 3480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122712 kB' 'Slab: 403652 kB' 'SReclaimable: 122712 kB' 'SUnreclaim: 280940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the same field-by-field scan over the node0 values printed above, continuing until HugePages_Surp matches]
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 29666228 kB' 'MemUsed: 14551980 kB' 'SwapCached: 0 kB' 'Active: 7694532 kB' 'Inactive: 3311412 kB' 'Active(anon): 7368744 kB' 'Inactive(anon): 0 kB' 'Active(file): 325788 kB' 'Inactive(file): 3311412 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10472548 kB' 'Mapped: 118616 kB' 'AnonPages: 533480 kB' 'Shmem: 6835348 kB' 'KernelStack: 10248 kB' 'PageTables: 5204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 370528 kB' 'Slab: 754320 kB' 'SReclaimable: 370528 kB' 'SUnreclaim: 383792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.358 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the same field-by-field scan over the node1 values printed above, continuing until HugePages_Surp matches]
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:52.619 node0=512 expecting 512
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:52.619 node1=512 expecting 512
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:52.619
00:03:52.619 real 0m3.106s
00:03:52.619 user 0m1.191s
00:03:52.619 sys 0m1.955s
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:52.619 00:30:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:52.619 ************************************
00:03:52.619 END TEST even_2G_alloc
00:03:52.619 ************************************
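even_2G_alloc finishes with each of the two NUMA nodes holding 512 of the 1024 requested pages. The odd_alloc test that starts next requests HUGEMEM=2049, i.e. 1025 huge pages of 2048 kB, and its trace below shows the split landing as 513 pages on node0 and 512 on node1. A rough bash model of that per-node split, using a hypothetical split_hugepages_sketch helper rather than the real setup/hugepages.sh logic:

# Rough model of how the hugepages tests spread HUGEMEM (MB) across NUMA nodes,
# based on the allocations observed in this log (hypothetical helper, not the
# actual setup/hugepages.sh code). Pages are 2048 kB, so pages = HUGEMEM*1024/2048,
# rounded up; lower-numbered nodes absorb any remainder.
split_hugepages_sketch() {
    local hugemem_mb=$1 no_nodes=$2
    local pages=$(( (hugemem_mb * 1024 + 2047) / 2048 ))
    local base=$(( pages / no_nodes )) rem=$(( pages % no_nodes )) node
    for (( node = 0; node < no_nodes; node++ )); do
        echo "node${node}=$(( base + (node < rem ? 1 : 0) ))"
    done
}
# split_hugepages_sketch 2048 2  -> node0=512 node1=512   (the even_2G_alloc result above)
# split_hugepages_sketch 2049 2  -> node0=513 node1=512   (the odd_alloc split in the trace below)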
common/autotest_common.sh@1142 -- # return 0 00:03:52.619 00:30:10 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:52.619 00:30:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.619 00:30:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.619 00:30:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.619 ************************************ 00:03:52.619 START TEST odd_alloc 00:03:52.619 ************************************ 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.619 00:30:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.154 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:55.154 
0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:55.154 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:55.154 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:55.154 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:55.154 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:55.154 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:55.416 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:55.416 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:55.416 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:55.416 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:55.416 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:55.416 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:55.416 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:55.416 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:55.416 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:55.416 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.416 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71490224 kB' 'MemAvailable: 75140048 kB' 'Buffers: 8400 kB' 'Cached: 14498904 kB' 'SwapCached: 0 kB' 'Active: 11670948 kB' 'Inactive: 3539720 kB' 'Active(anon): 11218124 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 706196 kB' 'Mapped: 161200 kB' 'Shmem: 10514760 kB' 'KReclaimable: 493240 kB' 'Slab: 1156904 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663664 kB' 'KernelStack: 22672 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12645724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220056 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB'
[xtrace condensed: the same field-by-field scan over the /proc/meminfo values printed above, continuing until AnonHugePages matches]
00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.417 00:30:13
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71491352 kB' 'MemAvailable: 75141176 kB' 'Buffers: 8400 kB' 'Cached: 14498908 kB' 'SwapCached: 0 kB' 'Active: 11670212 kB' 'Inactive: 3539720 kB' 'Active(anon): 11217388 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705936 kB' 'Mapped: 161124 kB' 'Shmem: 10514764 kB' 'KReclaimable: 493240 kB' 'Slab: 1156888 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663648 kB' 'KernelStack: 22672 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12645740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220024 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.417 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
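The loop being traced here is the get_meminfo helper from setup/common.sh: it reads the chosen meminfo file one line at a time with IFS=': ' and "read -r var val _", hits "continue" for every key that is not the one requested (hence the long runs of near-identical entries), and finally echoes the matching value and returns. A minimal sketch of that pattern, assuming a hypothetical helper name rather than the exact SPDK source:

#!/usr/bin/env bash
# Sketch of the meminfo-scanning pattern seen in the trace. get_meminfo_value
# is a hypothetical name; the real setup/common.sh helper also supports
# per-NUMA-node meminfo files.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching key is skipped
        echo "$val"                        # numeric value (kB for sized fields)
        return 0
    done < /proc/meminfo
    return 1
}

# example: the value the test stores as anon=
get_meminfo_value AnonHugePages

Each skipped key produces the IFS / read / [[ ... ]] / continue quartet visible above, which is why a single get_meminfo call expands to dozens of trace entries.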
00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
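The backslash-riddled right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not log corruption: when bash runs under set -x, the pattern side of a [[ ... == ... ]] comparison is printed with each character escaped to mark it as a literal match rather than a glob. A two-line reproduction (the exact rendering can vary slightly between bash versions):

# run as: bash -x demo.sh
var=HugePages_Surp
[[ $var == "HugePages_Surp" ]] && echo matched
# the -x trace renders the quoted pattern roughly as:
#   [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]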
00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.418 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 
00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71491976 kB' 'MemAvailable: 75141800 kB' 'Buffers: 8400 kB' 'Cached: 14498924 kB' 'SwapCached: 0 kB' 'Active: 11670224 kB' 'Inactive: 3539720 kB' 'Active(anon): 11217400 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705932 kB' 'Mapped: 161124 kB' 'Shmem: 10514780 kB' 'KReclaimable: 493240 kB' 'Slab: 1156888 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663648 kB' 'KernelStack: 22672 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12645760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220024 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 
'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.419 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.420 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
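get_meminfo can also read a single NUMA node. The trace shows the fallback path: with no node argument the [[ -e /sys/devices/system/node/node/meminfo ]] test (note the empty node number) fails and the helper stays on /proc/meminfo, while the mem=("${mem[@]#Node +([0-9]) }") step strips the "Node <n> " prefix that per-node meminfo files carry. A sketch of that source-selection logic, reconstructed from the trace rather than copied from the SPDK tree:

#!/usr/bin/env bash
shopt -s extglob                          # required for the +([0-9]) pattern
node=${1:-}                               # optional NUMA node number
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    # per-node statistics, e.g. /sys/devices/system/node/node0/meminfo
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
# per-node files prefix every line with "Node <n> "; strip it so the
# key/value parsing works the same for both sources
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}" | grep HugePages_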
00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.681 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:55.682 nr_hugepages=1025 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.682 resv_hugepages=0 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.682 surplus_hugepages=0 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.682 anon_hugepages=0 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages 
)) 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71491500 kB' 'MemAvailable: 75141324 kB' 'Buffers: 8400 kB' 'Cached: 14498964 kB' 'SwapCached: 0 kB' 'Active: 11669892 kB' 'Inactive: 3539720 kB' 'Active(anon): 11217068 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 705552 kB' 'Mapped: 161124 kB' 'Shmem: 10514820 kB' 'KReclaimable: 493240 kB' 'Slab: 1156888 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663648 kB' 'KernelStack: 22656 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482304 kB' 'Committed_AS: 12645780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220024 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.682 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.683 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41841140 kB' 'MemUsed: 6227256 kB' 'SwapCached: 0 kB' 'Active: 3974920 kB' 'Inactive: 228308 kB' 'Active(anon): 3847884 kB' 'Inactive(anon): 0 kB' 'Active(file): 127036 kB' 'Inactive(file): 228308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 
4034712 kB' 'Mapped: 42508 kB' 'AnonPages: 171736 kB' 'Shmem: 3679368 kB' 'KernelStack: 12392 kB' 'PageTables: 3448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122712 kB' 'Slab: 402520 kB' 'SReclaimable: 122712 kB' 'SUnreclaim: 279808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.684 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 29652244 kB' 'MemUsed: 14565964 kB' 'SwapCached: 0 kB' 'Active: 7695300 kB' 'Inactive: 3311412 kB' 'Active(anon): 7369512 kB' 'Inactive(anon): 0 kB' 'Active(file): 325788 kB' 'Inactive(file): 3311412 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10472656 kB' 'Mapped: 118616 kB' 'AnonPages: 534196 kB' 'Shmem: 6835456 kB' 'KernelStack: 10280 kB' 'PageTables: 5236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 370528 kB' 'Slab: 754368 kB' 'SReclaimable: 370528 kB' 'SUnreclaim: 383840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.685 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:55.686 node0=512 expecting 513 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:55.686 node1=513 expecting 512 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:55.686 00:03:55.686 real 0m3.130s 00:03:55.686 user 0m1.283s 00:03:55.686 sys 0m1.890s 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.686 00:30:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.686 ************************************ 00:03:55.686 END TEST odd_alloc 00:03:55.686 ************************************ 00:03:55.686 00:30:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:55.686 00:30:13 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:55.686 00:30:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.686 00:30:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.686 00:30:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.686 ************************************ 00:03:55.686 START TEST custom_alloc 00:03:55.686 ************************************ 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.686 00:30:13 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:55.686 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:55.687 
00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.687 00:30:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.983 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:58.983 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:58.983 
0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:58.983 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70442564 kB' 'MemAvailable: 74092388 kB' 'Buffers: 8400 kB' 'Cached: 14499064 kB' 'SwapCached: 0 kB' 'Active: 11676716 kB' 'Inactive: 3539720 kB' 'Active(anon): 11223892 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 712280 kB' 'Mapped: 162028 kB' 'Shmem: 10514920 kB' 'KReclaimable: 493240 kB' 
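[editorial note] The trace above assembles HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and then expects nr_hugepages=1536 in total before verify_nr_hugepages runs. As a rough, hedged illustration only -- this is not the SPDK scripts/setup.sh implementation -- the same per-NUMA-node split can be requested through the kernel's standard sysfs knobs:

    #!/usr/bin/env bash
    # Illustration only, not the SPDK setup.sh logic. It mirrors the split
    # requested above: 512 x 2 MiB hugepages on node 0 and 1024 on node 1,
    # i.e. 1536 pages in total.
    declare -A nodes_hp=([0]=512 [1]=1024)

    for node in "${!nodes_hp[@]}"; do
        echo "${nodes_hp[$node]}" | sudo tee \
            "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done

    # HugePages_Total in /proc/meminfo should then report 1536.
    grep HugePages_Total /proc/meminfo
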
'Slab: 1156744 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663504 kB' 'KernelStack: 22656 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12652644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220140 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.983 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.984 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 
00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70442612 kB' 'MemAvailable: 74092436 kB' 'Buffers: 8400 kB' 'Cached: 14499068 kB' 'SwapCached: 0 kB' 'Active: 11671128 kB' 'Inactive: 3539720 kB' 'Active(anon): 11218304 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 706660 kB' 'Mapped: 161560 kB' 'Shmem: 10514924 kB' 'KReclaimable: 493240 kB' 'Slab: 1156708 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663468 kB' 'KernelStack: 22672 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12646548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220072 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.985 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.986 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 
00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.987 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70442424 kB' 'MemAvailable: 74092248 kB' 'Buffers: 8400 kB' 'Cached: 14499068 kB' 'SwapCached: 0 kB' 'Active: 11671396 kB' 'Inactive: 3539720 kB' 'Active(anon): 11218572 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 706436 kB' 'Mapped: 161140 kB' 'Shmem: 10514924 kB' 'KReclaimable: 493240 kB' 'Slab: 1156828 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663588 kB' 'KernelStack: 22656 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12646568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220072 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.988 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 
00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.989 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:58.990 nr_hugepages=1536 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.990 resv_hugepages=0 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.990 surplus_hugepages=0 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.990 anon_hugepages=0 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 70442172 kB' 'MemAvailable: 74091996 kB' 'Buffers: 8400 kB' 'Cached: 14499124 kB' 'SwapCached: 0 kB' 'Active: 11670884 kB' 'Inactive: 3539720 kB' 'Active(anon): 11218060 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 706364 kB' 'Mapped: 161140 kB' 'Shmem: 
10514980 kB' 'KReclaimable: 493240 kB' 'Slab: 1156828 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663588 kB' 'KernelStack: 22688 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52959040 kB' 'Committed_AS: 12646592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220072 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
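The long run of "continue" steps in this stretch is setup/common.sh's get_meminfo walking the /proc/meminfo snapshot printed above line by line with IFS=': ' read -r var val _, skipping every key until the requested one matches, and echoing just the numeric value back to hugepages.sh. A minimal standalone sketch of that lookup, with the hypothetical name meminfo_value standing in for the real helper (sketch only, not the script itself):

meminfo_value() {
    # Sketch: mirrors the traced loop, not the exact setup/common.sh code.
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val
    # Per-node lookups read the node-local copy when one exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Node files prefix each line with "Node <N> "; strip that prefix the way
    # the trace's "${mem[@]#Node +([0-9]) }" expansion does.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" steps above
        echo "$val"                        # just the number, e.g. 0 or 1536
        return 0
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

With the snapshot dumped above, such a lookup would print 0 for HugePages_Rsvd (the resv=0 reported earlier) and 1536 for HugePages_Total, which is the value the scan below settles on before hugepages.sh re-checks its count.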
00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.990 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.991 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
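At this point hugepages.sh has confirmed that the system-wide HugePages_Total (1536) equals nr_hugepages plus surplus plus reserved, and get_nodes is about to record how those pages were split across the two NUMA nodes: the trace below assigns 512 to node 0 and 1024 to node 1, then re-reads each node's meminfo to pick up HugePages_Surp. A hedged sketch of that per-node cross-check (variable names are illustrative, not the script's own):

total=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Node meminfo lines look like "Node 0 HugePages_Total:   512".
    pages=$(sed -nE "s/^Node $node HugePages_Total: +([0-9]+).*/\1/p" "$node_dir/meminfo")
    node_pages[$node]=$pages
    (( total += pages ))
done
# Expected on this runner: node 0 holds 512 pages and node 1 holds 1024,
# so the sum should match the 1536 reported by /proc/meminfo.
if grep -q "^HugePages_Total: *${total}\$" /proc/meminfo; then
    echo "per-node split adds up to $total pages"
else
    echo "mismatch: nodes sum to $total pages"
fi

The same pattern applied to HugePages_Surp, which is what the trace reads next from node0 and node1 meminfo, returns 0 on both nodes here, leaving the surplus count at zero.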
00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 41836484 kB' 'MemUsed: 6231912 kB' 'SwapCached: 0 kB' 'Active: 3974664 kB' 'Inactive: 228308 kB' 'Active(anon): 3847628 kB' 'Inactive(anon): 0 kB' 'Active(file): 127036 kB' 'Inactive(file): 228308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4034856 kB' 'Mapped: 42524 kB' 'AnonPages: 171296 kB' 'Shmem: 3679512 kB' 'KernelStack: 12408 kB' 'PageTables: 3436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122712 kB' 'Slab: 402628 kB' 'SReclaimable: 122712 kB' 'SUnreclaim: 279916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.992 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.993 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44218208 kB' 'MemFree: 28606112 kB' 'MemUsed: 15612096 kB' 'SwapCached: 0 kB' 'Active: 7696084 kB' 'Inactive: 3311412 kB' 'Active(anon): 7370296 kB' 'Inactive(anon): 0 kB' 'Active(file): 325788 kB' 'Inactive(file): 3311412 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10472692 kB' 'Mapped: 118616 kB' 'AnonPages: 534916 kB' 'Shmem: 6835492 kB' 'KernelStack: 10248 kB' 'PageTables: 5244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 370528 kB' 'Slab: 754200 kB' 'SReclaimable: 370528 kB' 'SUnreclaim: 383672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.994 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.995 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.996 00:30:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:58.996 node0=512 expecting 512 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:58.996 node1=1024 expecting 1024 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:58.996 00:03:58.996 real 0m3.117s 00:03:58.996 user 0m1.272s 00:03:58.996 sys 0m1.889s 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.996 00:30:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:58.996 ************************************ 00:03:58.996 END TEST custom_alloc 00:03:58.996 ************************************ 00:03:58.996 00:30:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:58.996 00:30:16 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:58.996 00:30:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.996 00:30:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.996 00:30:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.996 ************************************ 00:03:58.996 START TEST no_shrink_alloc 00:03:58.996 ************************************ 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:58.996 00:30:16 
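Note: the custom_alloc trace above ends by echoing the observed per-node counts ("node0=512 expecting 512", "node1=1024 expecting 1024") and then comparing the comma-joined counts against the expected split. A minimal stand-alone sketch of that final check, with illustrative variable names rather than the literal hugepages.sh source:

```bash
#!/usr/bin/env bash
# Hedged sketch (illustrative names, not the hugepages.sh source): rebuild the
# "nodeN=... expecting ..." messages and the comma-joined comparison that the
# custom_alloc trace above prints.
declare -A nodes_test=([0]=512 [1]=1024)   # observed hugepages per NUMA node
expected="512,1024"                        # expected node0,node1 split

observed=""
for node in $(printf '%s\n' "${!nodes_test[@]}" | sort -n); do
    echo "node${node}=${nodes_test[$node]} expecting ${nodes_test[$node]}"
    observed+="${observed:+,}${nodes_test[$node]}"
done

# Equivalent of the [[ 512,1024 == 512,1024 ]] check that ends the test above
[[ $observed == "$expected" ]] && echo "custom_alloc split matches" || exit 1
```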
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.996 00:30:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.578 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:01.578 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:01.578 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:01.578 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:01.578 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:01.578 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:01.578 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:01.578 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:01.839 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:01.839 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:01.839 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:01.839 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:01.839 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:01.839 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:01.839 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:01.839 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:01.839 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- 
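Note: no_shrink_alloc starts by calling get_test_nr_hugepages 2097152 0 and then get_test_nr_hugepages_per_node 0; the trace shows the request being turned into nr_hugepages=1024 and recorded only for node 0. A rough sketch of that bookkeeping under the assumption (suggested by the 2048 kB Hugepagesize and the 1024-page result above) that the size argument is interpreted in kB; names are illustrative, not the literal hugepages.sh code:

```bash
#!/usr/bin/env bash
# Hedged sketch of the per-node bookkeeping traced above (illustrative, not the
# literal hugepages.sh source). 2097152 kB split into 2048 kB hugepages gives
# the nr_hugepages=1024 value seen in the log.
size_kb=2097152            # requested allocation
hugepagesize_kb=2048       # Hugepagesize reported in /proc/meminfo
user_nodes=(0)             # node list passed to the helper ('0' in the log)

nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 2097152 / 2048 = 1024

declare -a nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[node]=$nr_hugepages              # pin every page to the listed node
done

declare -p nodes_test    # -> declare -a nodes_test=([0]="1024")
```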
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71480048 kB' 'MemAvailable: 75129872 kB' 'Buffers: 8400 kB' 'Cached: 14499216 kB' 'SwapCached: 0 kB' 'Active: 11673240 kB' 'Inactive: 3539720 kB' 'Active(anon): 11220416 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 707916 kB' 'Mapped: 161736 kB' 'Shmem: 10515072 kB' 'KReclaimable: 493240 kB' 'Slab: 1156500 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663260 kB' 'KernelStack: 22656 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12649544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220216 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 
00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.839 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71475028 kB' 'MemAvailable: 75124852 kB' 'Buffers: 8400 kB' 'Cached: 14499220 kB' 'SwapCached: 0 kB' 'Active: 11677776 kB' 'Inactive: 3539720 kB' 'Active(anon): 11224952 
kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 712632 kB' 'Mapped: 162052 kB' 'Shmem: 10515076 kB' 'KReclaimable: 493240 kB' 'Slab: 1156500 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663260 kB' 'KernelStack: 22688 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12653272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220188 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.840 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71481872 kB' 'MemAvailable: 75131696 kB' 'Buffers: 8400 kB' 'Cached: 14499220 kB' 'SwapCached: 0 kB' 'Active: 11671484 kB' 'Inactive: 3539720 kB' 'Active(anon): 11218660 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 706832 kB' 'Mapped: 161488 kB' 'Shmem: 10515076 kB' 
'KReclaimable: 493240 kB' 'Slab: 1156520 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663280 kB' 'KernelStack: 22720 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12647040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220184 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.841 00:30:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.842 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.102 00:30:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.102 nr_hugepages=1024 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.102 resv_hugepages=0 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.102 surplus_hugepages=0 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.102 anon_hugepages=0 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.102 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71481992 kB' 'MemAvailable: 75131816 kB' 'Buffers: 8400 kB' 'Cached: 14499260 kB' 'SwapCached: 0 kB' 'Active: 11672636 kB' 'Inactive: 3539720 kB' 'Active(anon): 11219812 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 707936 kB' 'Mapped: 
161156 kB' 'Shmem: 10515116 kB' 'KReclaimable: 493240 kB' 'Slab: 1156520 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663280 kB' 'KernelStack: 22704 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12662604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220200 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.103 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40774012 kB' 'MemUsed: 7294384 kB' 'SwapCached: 0 kB' 'Active: 3975004 kB' 'Inactive: 228308 kB' 'Active(anon): 3847968 kB' 'Inactive(anon): 0 kB' 'Active(file): 127036 kB' 'Inactive(file): 228308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4034956 kB' 'Mapped: 42540 kB' 'AnonPages: 171500 kB' 'Shmem: 3679612 kB' 'KernelStack: 12408 kB' 'PageTables: 3436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122712 kB' 'Slab: 402456 kB' 'SReclaimable: 122712 kB' 'SUnreclaim: 279744 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 
00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.104 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.105 node0=1024 expecting 1024 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.105 00:30:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.402 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:05.402 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:80:04.6 (8086 2021): Already using the 
vfio-pci driver 00:04:05.402 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:05.402 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:05.402 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71494472 kB' 'MemAvailable: 75144296 kB' 'Buffers: 8400 kB' 'Cached: 14499380 kB' 'SwapCached: 0 kB' 'Active: 11672924 kB' 'Inactive: 3539720 kB' 'Active(anon): 11220100 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 708096 kB' 'Mapped: 161164 kB' 'Shmem: 10515236 kB' 'KReclaimable: 493240 kB' 'Slab: 1156808 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663568 kB' 'KernelStack: 22688 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12647780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220152 
kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.402 00:30:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.403 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.404 00:30:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71495088 kB' 'MemAvailable: 75144912 kB' 'Buffers: 8400 kB' 'Cached: 14499380 kB' 'SwapCached: 0 kB' 'Active: 11672280 kB' 'Inactive: 3539720 kB' 'Active(anon): 11219456 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 707448 kB' 'Mapped: 161164 kB' 'Shmem: 10515236 kB' 'KReclaimable: 493240 kB' 'Slab: 1156928 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663688 kB' 'KernelStack: 22672 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12647796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220136 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.404 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.405 
00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.405 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71496916 kB' 'MemAvailable: 75146740 kB' 'Buffers: 8400 kB' 'Cached: 14499400 kB' 'SwapCached: 0 kB' 'Active: 11672304 kB' 'Inactive: 3539720 kB' 'Active(anon): 11219480 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 707448 kB' 'Mapped: 161164 kB' 'Shmem: 10515256 kB' 'KReclaimable: 493240 kB' 'Slab: 1156928 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663688 kB' 'KernelStack: 22672 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12647820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220136 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.406 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
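The near-identical lines above and below are the xtrace of a single read loop over /proc/meminfo: each row's field name is compared against the key being looked up (HugePages_Rsvd at this point) and skipped with `continue` when it does not match, which is why one IFS/read/test/continue quartet appears per field. The backslash-escaped pattern is simply how bash renders a quoted right-hand side of `==` inside `[[ ]]` when tracing. A tiny illustration with hypothetical values (not the SPDK helper itself):

  field=MemTotal
  target=HugePages_Rsvd
  set -x
  [[ $field == "$target" ]] || echo skipped    # traces as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
  set +x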
00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.407 nr_hugepages=1024 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.407 resv_hugepages=0 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.407 surplus_hugepages=0 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.407 anon_hugepages=0 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.407 00:30:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.407 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92286604 kB' 'MemFree: 71497340 kB' 'MemAvailable: 75147164 kB' 'Buffers: 8400 kB' 'Cached: 14499420 kB' 'SwapCached: 0 kB' 'Active: 11672316 kB' 'Inactive: 3539720 kB' 'Active(anon): 11219492 kB' 'Inactive(anon): 0 kB' 'Active(file): 452824 kB' 'Inactive(file): 3539720 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 707448 kB' 'Mapped: 161164 kB' 'Shmem: 10515276 kB' 'KReclaimable: 493240 kB' 'Slab: 1156928 kB' 'SReclaimable: 493240 kB' 'SUnreclaim: 663688 kB' 'KernelStack: 22672 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53483328 kB' 'Committed_AS: 12647840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220136 kB' 'VmallocChunk: 0 kB' 'Percpu: 96320 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4111316 kB' 'DirectMap2M: 30171136 kB' 'DirectMap1G: 67108864 kB' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
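For reference, the helper producing this output reads either /proc/meminfo or a per-node meminfo file and echoes the value of the requested field; the dump just above ('MemTotal: 92286604 kB' ...) is that file captured with mapfile. A simplified, hedged sketch of such a lookup (the real helper is get_meminfo in test/setup/common.sh and differs in detail):

  # Simplified sketch of a get_meminfo-style lookup; illustrative only.
  get_meminfo() {
      local get=$1 node=${2-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while read -r line; do
          line=${line#"Node $node "}            # per-node rows are prefixed with "Node <n> "
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < "$mem_f"
      return 1
  }
  # e.g. get_meminfo HugePages_Total      -> 1024 (system-wide)
  #      get_meminfo HugePages_Surp 0     -> 0    (NUMA node 0)

The surrounding hugepages.sh logic then folds HugePages_Rsvd and HugePages_Surp into its expectation, which is why resv=0, surplus_hugepages=0, and the (( 1024 == nr_hugepages + surp + resv )) check appear above.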
00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.408 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.409 00:30:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48068396 kB' 'MemFree: 40771268 kB' 'MemUsed: 7297128 kB' 'SwapCached: 0 kB' 'Active: 3975372 kB' 'Inactive: 228308 kB' 'Active(anon): 3848336 kB' 'Inactive(anon): 0 kB' 'Active(file): 127036 kB' 'Inactive(file): 228308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4035068 kB' 'Mapped: 42548 kB' 'AnonPages: 171764 kB' 'Shmem: 3679724 kB' 'KernelStack: 12424 kB' 'PageTables: 3536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122712 kB' 'Slab: 402468 kB' 'SReclaimable: 122712 kB' 'SUnreclaim: 279756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.409 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
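get_nodes, traced a little earlier, discovers the NUMA topology from sysfs (two nodes here, with the 1024 pages resident on node 0), and the per-node check above then reads /sys/devices/system/node/node0/meminfo. A rough sketch of that enumeration, assuming the standard 2048 kB hugepage sysfs layout (paths and variable names are illustrative):

  shopt -s extglob nullglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      n=${node##*node}                                        # "nodeN" -> N
      nodes_sys[n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}
  echo "nodes=$no_nodes allocation=(${nodes_sys[*]})"         # here: nodes=2 allocation=(1024 0)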
00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.410 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.411 node0=1024 expecting 1024 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.411 00:04:05.411 real 0m6.191s 00:04:05.411 user 0m2.429s 00:04:05.411 sys 0m3.843s 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.411 00:30:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:05.411 ************************************ 00:04:05.411 END TEST no_shrink_alloc 00:04:05.411 ************************************ 00:04:05.411 00:30:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:05.411 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:05.411 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:05.412 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.412 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.412 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.412 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.412 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.412 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.412 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.412 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.412 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.412 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.412 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:05.412 00:30:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:05.412 00:04:05.412 real 0m23.345s 00:04:05.412 user 0m9.000s 00:04:05.412 sys 0m13.879s 00:04:05.412 00:30:22 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.412 00:30:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.412 ************************************ 00:04:05.412 END TEST hugepages 00:04:05.412 ************************************ 00:04:05.412 00:30:22 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:05.412 00:30:22 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:05.412 00:30:22 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.412 00:30:22 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.412 00:30:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.412 ************************************ 00:04:05.412 START TEST driver 00:04:05.412 ************************************ 00:04:05.412 00:30:22 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:05.412 * Looking for test storage... 
00:04:05.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:05.412 00:30:23 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:05.412 00:30:23 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.412 00:30:23 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.601 00:30:27 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:09.601 00:30:27 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.601 00:30:27 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.601 00:30:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:09.601 ************************************ 00:04:09.601 START TEST guess_driver 00:04:09.601 ************************************ 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 175 > 0 )) 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:09.601 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:09.601 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:09.601 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:09.601 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:09.601 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:09.601 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:09.601 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:09.601 00:30:27 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:09.601 Looking for driver=vfio-pci 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.601 00:30:27 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.889 00:30:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.458 00:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.458 00:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.458 00:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.458 00:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:13.458 00:30:31 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:13.458 00:30:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.458 00:30:31 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.651 00:04:17.651 real 0m8.180s 00:04:17.651 user 0m2.372s 00:04:17.651 sys 0m4.185s 00:04:17.651 00:30:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.651 00:30:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:17.651 ************************************ 00:04:17.651 END TEST guess_driver 00:04:17.651 ************************************ 00:04:17.651 00:30:35 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:17.651 00:04:17.651 real 0m12.532s 00:04:17.651 user 0m3.605s 00:04:17.651 sys 0m6.478s 00:04:17.651 00:30:35 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.651 00:30:35 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:17.651 ************************************ 00:04:17.651 END TEST driver 00:04:17.651 ************************************ 00:04:17.908 00:30:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:17.908 00:30:35 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:17.908 00:30:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.908 00:30:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.908 00:30:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:17.908 ************************************ 00:04:17.908 START TEST devices 00:04:17.908 ************************************ 00:04:17.908 00:30:35 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:17.908 * Looking for test storage... 00:04:17.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:17.908 00:30:35 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:17.908 00:30:35 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:17.908 00:30:35 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.908 00:30:35 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:21.198 00:30:38 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:21.198 00:30:38 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:21.198 00:30:38 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:21.198 00:30:38 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:21.198 00:30:38 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:21.198 00:30:38 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:21.198 00:30:38 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:21.198 00:30:38 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:86:00.0 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\6\:\0\0\.\0* ]] 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:21.198 00:30:38 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:21.198 
00:30:38 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:21.198 No valid GPT data, bailing 00:04:21.198 00:30:38 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:21.198 00:30:38 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:21.198 00:30:38 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:21.198 00:30:38 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:21.198 00:30:38 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:21.198 00:30:38 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:86:00.0 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:21.198 00:30:38 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:21.198 00:30:38 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.198 00:30:38 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.198 00:30:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:21.198 ************************************ 00:04:21.198 START TEST nvme_mount 00:04:21.198 ************************************ 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:21.198 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:21.199 00:30:39 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:22.576 Creating new GPT entries in memory. 00:04:22.576 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:22.576 other utilities. 00:04:22.576 00:30:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:22.576 00:30:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:22.576 00:30:40 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:22.576 00:30:40 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:22.576 00:30:40 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:23.513 Creating new GPT entries in memory. 00:04:23.513 The operation has completed successfully. 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2821663 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:86:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.513 00:30:41 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.513 00:30:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.056 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.057 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.057 00:30:43 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.057 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.057 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.057 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.057 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.057 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.057 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.057 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.057 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.057 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.057 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:26.057 00:30:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.315 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.315 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:26.315 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.315 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.315 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.315 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:26.315 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.315 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.315 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.315 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:26.315 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:26.315 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.315 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.575 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:26.575 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:26.575 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:26.575 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:26.575 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:26.575 00:30:44 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:26.575 00:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.575 00:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:26.575 00:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:26.575 00:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:86:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.834 00:30:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:29.370 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:86:00.0 data@nvme0n1 '' '' 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.630 00:30:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:32.920 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.920 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:32.920 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:32.920 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.920 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.920 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.920 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 
00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:32.921 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:32.921 00:04:32.921 real 0m11.270s 00:04:32.921 user 0m3.393s 00:04:32.921 sys 0m5.710s 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.921 00:30:50 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:32.921 ************************************ 00:04:32.921 END TEST nvme_mount 00:04:32.921 ************************************ 00:04:32.921 00:30:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:32.921 00:30:50 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:32.921 00:30:50 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.921 00:30:50 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.921 00:30:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:32.921 ************************************ 00:04:32.921 START TEST dm_mount 00:04:32.921 ************************************ 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:32.921 00:30:50 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:33.859 Creating new GPT entries in memory. 00:04:33.859 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:33.859 other utilities. 00:04:33.859 00:30:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:33.859 00:30:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:33.859 00:30:51 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:33.859 00:30:51 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:33.859 00:30:51 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:34.797 Creating new GPT entries in memory. 00:04:34.797 The operation has completed successfully. 00:04:34.797 00:30:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:34.797 00:30:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.797 00:30:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.797 00:30:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.797 00:30:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:35.735 The operation has completed successfully. 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2825863 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:86:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.735 00:30:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:86:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:39.024 00:30:56 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:86:00.0 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:86:00.0 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.024 00:30:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:86:00.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.629 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\6\:\0\0\.\0 ]] 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:41.630 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:41.630 00:04:41.630 real 0m8.950s 00:04:41.630 user 0m2.150s 00:04:41.630 sys 0m3.832s 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.630 00:30:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:41.630 ************************************ 00:04:41.630 END TEST dm_mount 00:04:41.630 ************************************ 00:04:41.630 00:30:59 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:41.630 00:30:59 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:41.630 00:30:59 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:41.630 00:30:59 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.630 00:30:59 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.630 00:30:59 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:41.630 00:30:59 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:41.630 00:30:59 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:41.890 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:41.890 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:41.890 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:41.890 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:41.890 00:30:59 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:41.890 00:30:59 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.890 00:30:59 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:41.890 00:30:59 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.890 00:30:59 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:41.890 00:30:59 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:41.890 00:30:59 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:41.890 00:04:41.890 real 0m24.081s 00:04:41.890 user 0m6.886s 00:04:41.890 sys 0m11.933s 00:04:41.890 00:30:59 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.890 00:30:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:41.890 ************************************ 00:04:41.890 END TEST devices 00:04:41.890 ************************************ 00:04:41.890 00:30:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:41.890 00:04:41.890 real 1m21.267s 00:04:41.890 user 0m26.732s 00:04:41.890 sys 0m44.941s 00:04:41.890 00:30:59 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.890 00:30:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:41.890 ************************************ 00:04:41.890 END TEST setup.sh 00:04:41.890 ************************************ 00:04:41.890 00:30:59 -- common/autotest_common.sh@1142 -- # return 0 00:04:41.890 00:30:59 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:45.178 Hugepages 00:04:45.178 node hugesize free / total 00:04:45.178 node0 1048576kB 0 / 0 00:04:45.178 node0 2048kB 2048 / 2048 00:04:45.178 node1 1048576kB 0 / 0 00:04:45.178 node1 2048kB 0 / 0 00:04:45.178 00:04:45.178 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.178 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:45.178 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:45.178 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:45.178 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:45.178 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:45.178 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:45.178 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:45.178 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:45.178 I/OAT 
0000:80:04.0 8086 2021 1 ioatdma - - 00:04:45.178 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:45.178 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:45.178 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:45.178 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:45.178 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:45.178 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:45.178 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:45.178 NVMe 0000:86:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:45.178 00:31:02 -- spdk/autotest.sh@130 -- # uname -s 00:04:45.178 00:31:02 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:45.178 00:31:02 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:45.178 00:31:02 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:47.712 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:47.712 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:47.712 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:47.712 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:47.712 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:47.712 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:47.712 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:47.712 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:47.712 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:47.712 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:47.712 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:47.712 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:47.977 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:47.977 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:47.977 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:47.977 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:48.914 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:48.914 00:31:06 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:49.851 00:31:07 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:49.851 00:31:07 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:49.851 00:31:07 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:49.851 00:31:07 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:49.851 00:31:07 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:49.851 00:31:07 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:49.851 00:31:07 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.851 00:31:07 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:49.851 00:31:07 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:49.851 00:31:07 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:49.851 00:31:07 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:04:49.851 00:31:07 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.141 Waiting for block devices as requested 00:04:53.141 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:04:53.141 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:53.141 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:53.141 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:53.141 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:53.141 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:53.141 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:53.400 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:53.400 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:53.400 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:53.400 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:53.659 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:53.659 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:53.659 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:53.916 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:53.916 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:53.916 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:54.175 00:31:11 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:54.175 00:31:11 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:86:00.0 00:04:54.175 00:31:11 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:54.175 00:31:11 -- common/autotest_common.sh@1502 -- # grep 0000:86:00.0/nvme/nvme 00:04:54.175 00:31:11 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:54.175 00:31:11 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 ]] 00:04:54.175 00:31:11 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:85/0000:85:00.0/0000:86:00.0/nvme/nvme0 00:04:54.175 00:31:11 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:54.175 00:31:11 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:54.175 00:31:11 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:54.175 00:31:11 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:54.175 00:31:11 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:54.175 00:31:11 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:54.175 00:31:11 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:54.175 00:31:11 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:54.175 00:31:11 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:54.175 00:31:11 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:54.175 00:31:11 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:54.175 00:31:11 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:54.175 00:31:11 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:54.175 00:31:11 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:54.175 00:31:11 -- common/autotest_common.sh@1557 -- # continue 00:04:54.175 00:31:11 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:54.175 00:31:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.175 00:31:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.175 00:31:11 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:54.175 00:31:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.175 00:31:11 -- common/autotest_common.sh@10 -- # set +x 00:04:54.175 00:31:11 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.463 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:04:57.463 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.463 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:58.031 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:04:58.031 00:31:15 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:58.031 00:31:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.031 00:31:15 -- common/autotest_common.sh@10 -- # set +x 00:04:58.031 00:31:15 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:58.031 00:31:15 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:58.031 00:31:15 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.031 00:31:15 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:58.031 00:31:15 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:58.031 00:31:15 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:58.031 00:31:15 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:58.031 00:31:15 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:58.031 00:31:15 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.031 00:31:15 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.031 00:31:15 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.290 00:31:15 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:58.290 00:31:15 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:04:58.290 00:31:15 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:58.290 00:31:15 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:86:00.0/device 00:04:58.290 00:31:15 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:58.290 00:31:15 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:58.290 00:31:15 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:58.290 00:31:15 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:86:00.0 00:04:58.290 00:31:15 -- common/autotest_common.sh@1592 -- # [[ -z 0000:86:00.0 ]] 00:04:58.290 00:31:15 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2835418 00:04:58.290 00:31:15 -- common/autotest_common.sh@1598 -- # waitforlisten 2835418 00:04:58.290 00:31:15 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.290 00:31:15 -- common/autotest_common.sh@829 -- # '[' -z 2835418 ']' 00:04:58.290 00:31:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.290 00:31:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.290 00:31:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.291 00:31:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.291 00:31:15 -- common/autotest_common.sh@10 -- # set +x 00:04:58.291 [2024-07-16 00:31:15.991843] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:04:58.291 [2024-07-16 00:31:15.991903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835418 ] 00:04:58.291 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.291 [2024-07-16 00:31:16.072914] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.549 [2024-07-16 00:31:16.167034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.807 00:31:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.808 00:31:16 -- common/autotest_common.sh@862 -- # return 0 00:04:58.808 00:31:16 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:58.808 00:31:16 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:58.808 00:31:16 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:86:00.0 00:05:02.094 nvme0n1 00:05:02.094 00:31:19 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:02.662 [2024-07-16 00:31:20.247587] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:02.662 request: 00:05:02.662 { 00:05:02.662 "nvme_ctrlr_name": "nvme0", 00:05:02.662 "password": "test", 00:05:02.662 "method": "bdev_nvme_opal_revert", 00:05:02.662 "req_id": 1 00:05:02.662 } 00:05:02.662 Got JSON-RPC error response 00:05:02.662 response: 00:05:02.662 { 00:05:02.662 "code": -32602, 00:05:02.662 "message": "Invalid parameters" 00:05:02.662 } 00:05:02.662 00:31:20 -- common/autotest_common.sh@1604 -- # true 00:05:02.662 00:31:20 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:02.662 00:31:20 -- common/autotest_common.sh@1608 -- # killprocess 2835418 00:05:02.662 00:31:20 -- common/autotest_common.sh@948 -- # '[' -z 2835418 ']' 00:05:02.662 00:31:20 -- common/autotest_common.sh@952 -- # kill -0 2835418 00:05:02.662 00:31:20 -- common/autotest_common.sh@953 -- # uname 00:05:02.662 00:31:20 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.662 00:31:20 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2835418 00:05:02.662 00:31:20 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.662 00:31:20 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.662 00:31:20 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2835418' 00:05:02.662 killing process with pid 2835418 00:05:02.662 00:31:20 -- common/autotest_common.sh@967 -- # kill 2835418 00:05:02.662 00:31:20 -- common/autotest_common.sh@972 -- # wait 2835418 00:05:04.565 00:31:21 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:04.565 00:31:21 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:04.565 00:31:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:04.565 00:31:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:04.565 00:31:21 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:04.565 00:31:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.565 00:31:21 -- common/autotest_common.sh@10 -- # set +x 00:05:04.565 00:31:21 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:04.565 00:31:21 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:04.565 00:31:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:05:04.565 00:31:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.565 00:31:21 -- common/autotest_common.sh@10 -- # set +x 00:05:04.565 ************************************ 00:05:04.565 START TEST env 00:05:04.565 ************************************ 00:05:04.565 00:31:22 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:04.565 * Looking for test storage... 00:05:04.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:04.565 00:31:22 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:04.565 00:31:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.565 00:31:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.565 00:31:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.565 ************************************ 00:05:04.565 START TEST env_memory 00:05:04.565 ************************************ 00:05:04.565 00:31:22 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:04.565 00:05:04.565 00:05:04.565 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.565 http://cunit.sourceforge.net/ 00:05:04.565 00:05:04.565 00:05:04.565 Suite: memory 00:05:04.565 Test: alloc and free memory map ...[2024-07-16 00:31:22.208049] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:04.565 passed 00:05:04.565 Test: mem map translation ...[2024-07-16 00:31:22.237229] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:04.565 [2024-07-16 00:31:22.237249] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:04.565 [2024-07-16 00:31:22.237307] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:04.565 [2024-07-16 00:31:22.237321] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:04.565 passed 00:05:04.565 Test: mem map registration ...[2024-07-16 00:31:22.297076] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:04.565 [2024-07-16 00:31:22.297094] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:04.565 passed 00:05:04.565 Test: mem map adjacent registrations ...passed 00:05:04.565 00:05:04.565 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.565 suites 1 1 n/a 0 0 00:05:04.565 tests 4 4 4 0 0 00:05:04.565 asserts 152 152 152 0 n/a 00:05:04.565 00:05:04.565 Elapsed time = 0.203 seconds 00:05:04.565 00:05:04.566 real 0m0.216s 00:05:04.566 user 0m0.205s 00:05:04.566 sys 0m0.011s 00:05:04.566 00:31:22 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.566 00:31:22 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:05:04.566 ************************************ 00:05:04.566 END TEST env_memory 00:05:04.566 ************************************ 00:05:04.825 00:31:22 env -- common/autotest_common.sh@1142 -- # return 0 00:05:04.825 00:31:22 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:04.825 00:31:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.825 00:31:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.825 00:31:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.825 ************************************ 00:05:04.825 START TEST env_vtophys 00:05:04.825 ************************************ 00:05:04.825 00:31:22 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:04.825 EAL: lib.eal log level changed from notice to debug 00:05:04.825 EAL: Detected lcore 0 as core 0 on socket 0 00:05:04.825 EAL: Detected lcore 1 as core 1 on socket 0 00:05:04.825 EAL: Detected lcore 2 as core 2 on socket 0 00:05:04.825 EAL: Detected lcore 3 as core 3 on socket 0 00:05:04.825 EAL: Detected lcore 4 as core 4 on socket 0 00:05:04.825 EAL: Detected lcore 5 as core 5 on socket 0 00:05:04.825 EAL: Detected lcore 6 as core 6 on socket 0 00:05:04.825 EAL: Detected lcore 7 as core 8 on socket 0 00:05:04.825 EAL: Detected lcore 8 as core 9 on socket 0 00:05:04.825 EAL: Detected lcore 9 as core 10 on socket 0 00:05:04.825 EAL: Detected lcore 10 as core 11 on socket 0 00:05:04.825 EAL: Detected lcore 11 as core 12 on socket 0 00:05:04.825 EAL: Detected lcore 12 as core 13 on socket 0 00:05:04.825 EAL: Detected lcore 13 as core 14 on socket 0 00:05:04.825 EAL: Detected lcore 14 as core 16 on socket 0 00:05:04.825 EAL: Detected lcore 15 as core 17 on socket 0 00:05:04.825 EAL: Detected lcore 16 as core 18 on socket 0 00:05:04.825 EAL: Detected lcore 17 as core 19 on socket 0 00:05:04.825 EAL: Detected lcore 18 as core 20 on socket 0 00:05:04.825 EAL: Detected lcore 19 as core 21 on socket 0 00:05:04.825 EAL: Detected lcore 20 as core 22 on socket 0 00:05:04.825 EAL: Detected lcore 21 as core 24 on socket 0 00:05:04.825 EAL: Detected lcore 22 as core 25 on socket 0 00:05:04.825 EAL: Detected lcore 23 as core 26 on socket 0 00:05:04.825 EAL: Detected lcore 24 as core 27 on socket 0 00:05:04.825 EAL: Detected lcore 25 as core 28 on socket 0 00:05:04.825 EAL: Detected lcore 26 as core 29 on socket 0 00:05:04.825 EAL: Detected lcore 27 as core 30 on socket 0 00:05:04.825 EAL: Detected lcore 28 as core 0 on socket 1 00:05:04.825 EAL: Detected lcore 29 as core 1 on socket 1 00:05:04.825 EAL: Detected lcore 30 as core 2 on socket 1 00:05:04.825 EAL: Detected lcore 31 as core 3 on socket 1 00:05:04.825 EAL: Detected lcore 32 as core 4 on socket 1 00:05:04.825 EAL: Detected lcore 33 as core 5 on socket 1 00:05:04.825 EAL: Detected lcore 34 as core 6 on socket 1 00:05:04.825 EAL: Detected lcore 35 as core 8 on socket 1 00:05:04.825 EAL: Detected lcore 36 as core 9 on socket 1 00:05:04.825 EAL: Detected lcore 37 as core 10 on socket 1 00:05:04.825 EAL: Detected lcore 38 as core 11 on socket 1 00:05:04.825 EAL: Detected lcore 39 as core 12 on socket 1 00:05:04.825 EAL: Detected lcore 40 as core 13 on socket 1 00:05:04.825 EAL: Detected lcore 41 as core 14 on socket 1 00:05:04.825 EAL: Detected lcore 42 as core 16 on socket 1 00:05:04.825 EAL: Detected lcore 43 as core 17 on socket 1 00:05:04.825 EAL: Detected lcore 44 as core 
18 on socket 1 00:05:04.825 EAL: Detected lcore 45 as core 19 on socket 1 00:05:04.825 EAL: Detected lcore 46 as core 20 on socket 1 00:05:04.825 EAL: Detected lcore 47 as core 21 on socket 1 00:05:04.825 EAL: Detected lcore 48 as core 22 on socket 1 00:05:04.825 EAL: Detected lcore 49 as core 24 on socket 1 00:05:04.825 EAL: Detected lcore 50 as core 25 on socket 1 00:05:04.825 EAL: Detected lcore 51 as core 26 on socket 1 00:05:04.825 EAL: Detected lcore 52 as core 27 on socket 1 00:05:04.825 EAL: Detected lcore 53 as core 28 on socket 1 00:05:04.825 EAL: Detected lcore 54 as core 29 on socket 1 00:05:04.825 EAL: Detected lcore 55 as core 30 on socket 1 00:05:04.825 EAL: Detected lcore 56 as core 0 on socket 0 00:05:04.825 EAL: Detected lcore 57 as core 1 on socket 0 00:05:04.825 EAL: Detected lcore 58 as core 2 on socket 0 00:05:04.825 EAL: Detected lcore 59 as core 3 on socket 0 00:05:04.825 EAL: Detected lcore 60 as core 4 on socket 0 00:05:04.825 EAL: Detected lcore 61 as core 5 on socket 0 00:05:04.825 EAL: Detected lcore 62 as core 6 on socket 0 00:05:04.825 EAL: Detected lcore 63 as core 8 on socket 0 00:05:04.825 EAL: Detected lcore 64 as core 9 on socket 0 00:05:04.825 EAL: Detected lcore 65 as core 10 on socket 0 00:05:04.825 EAL: Detected lcore 66 as core 11 on socket 0 00:05:04.825 EAL: Detected lcore 67 as core 12 on socket 0 00:05:04.825 EAL: Detected lcore 68 as core 13 on socket 0 00:05:04.825 EAL: Detected lcore 69 as core 14 on socket 0 00:05:04.825 EAL: Detected lcore 70 as core 16 on socket 0 00:05:04.825 EAL: Detected lcore 71 as core 17 on socket 0 00:05:04.825 EAL: Detected lcore 72 as core 18 on socket 0 00:05:04.825 EAL: Detected lcore 73 as core 19 on socket 0 00:05:04.825 EAL: Detected lcore 74 as core 20 on socket 0 00:05:04.825 EAL: Detected lcore 75 as core 21 on socket 0 00:05:04.825 EAL: Detected lcore 76 as core 22 on socket 0 00:05:04.825 EAL: Detected lcore 77 as core 24 on socket 0 00:05:04.825 EAL: Detected lcore 78 as core 25 on socket 0 00:05:04.825 EAL: Detected lcore 79 as core 26 on socket 0 00:05:04.825 EAL: Detected lcore 80 as core 27 on socket 0 00:05:04.825 EAL: Detected lcore 81 as core 28 on socket 0 00:05:04.825 EAL: Detected lcore 82 as core 29 on socket 0 00:05:04.825 EAL: Detected lcore 83 as core 30 on socket 0 00:05:04.825 EAL: Detected lcore 84 as core 0 on socket 1 00:05:04.825 EAL: Detected lcore 85 as core 1 on socket 1 00:05:04.825 EAL: Detected lcore 86 as core 2 on socket 1 00:05:04.825 EAL: Detected lcore 87 as core 3 on socket 1 00:05:04.825 EAL: Detected lcore 88 as core 4 on socket 1 00:05:04.825 EAL: Detected lcore 89 as core 5 on socket 1 00:05:04.825 EAL: Detected lcore 90 as core 6 on socket 1 00:05:04.825 EAL: Detected lcore 91 as core 8 on socket 1 00:05:04.825 EAL: Detected lcore 92 as core 9 on socket 1 00:05:04.825 EAL: Detected lcore 93 as core 10 on socket 1 00:05:04.825 EAL: Detected lcore 94 as core 11 on socket 1 00:05:04.825 EAL: Detected lcore 95 as core 12 on socket 1 00:05:04.825 EAL: Detected lcore 96 as core 13 on socket 1 00:05:04.825 EAL: Detected lcore 97 as core 14 on socket 1 00:05:04.826 EAL: Detected lcore 98 as core 16 on socket 1 00:05:04.826 EAL: Detected lcore 99 as core 17 on socket 1 00:05:04.826 EAL: Detected lcore 100 as core 18 on socket 1 00:05:04.826 EAL: Detected lcore 101 as core 19 on socket 1 00:05:04.826 EAL: Detected lcore 102 as core 20 on socket 1 00:05:04.826 EAL: Detected lcore 103 as core 21 on socket 1 00:05:04.826 EAL: Detected lcore 104 as core 22 on socket 1 00:05:04.826 
EAL: Detected lcore 105 as core 24 on socket 1 00:05:04.826 EAL: Detected lcore 106 as core 25 on socket 1 00:05:04.826 EAL: Detected lcore 107 as core 26 on socket 1 00:05:04.826 EAL: Detected lcore 108 as core 27 on socket 1 00:05:04.826 EAL: Detected lcore 109 as core 28 on socket 1 00:05:04.826 EAL: Detected lcore 110 as core 29 on socket 1 00:05:04.826 EAL: Detected lcore 111 as core 30 on socket 1 00:05:04.826 EAL: Maximum logical cores by configuration: 128 00:05:04.826 EAL: Detected CPU lcores: 112 00:05:04.826 EAL: Detected NUMA nodes: 2 00:05:04.826 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:04.826 EAL: Detected shared linkage of DPDK 00:05:04.826 EAL: No shared files mode enabled, IPC will be disabled 00:05:04.826 EAL: Bus pci wants IOVA as 'DC' 00:05:04.826 EAL: Buses did not request a specific IOVA mode. 00:05:04.826 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:04.826 EAL: Selected IOVA mode 'VA' 00:05:04.826 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.826 EAL: Probing VFIO support... 00:05:04.826 EAL: IOMMU type 1 (Type 1) is supported 00:05:04.826 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:04.826 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:04.826 EAL: VFIO support initialized 00:05:04.826 EAL: Ask a virtual area of 0x2e000 bytes 00:05:04.826 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:04.826 EAL: Setting up physically contiguous memory... 00:05:04.826 EAL: Setting maximum number of open files to 524288 00:05:04.826 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:04.826 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:04.826 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:04.826 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.826 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:04.826 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.826 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.826 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:04.826 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:04.826 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.826 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:04.826 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.826 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.826 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:04.826 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:04.826 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.826 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:04.826 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.826 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.826 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:04.826 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:04.826 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.826 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:04.826 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.826 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.826 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:04.826 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:04.826 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:04.826 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.826 EAL: 
Virtual area found at 0x201000800000 (size = 0x61000) 00:05:04.826 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.826 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.826 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:04.826 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:04.826 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.826 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:04.826 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.826 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.826 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:04.826 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:04.826 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.826 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:04.826 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.826 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.826 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:04.826 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:04.826 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.826 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:04.826 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.826 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.826 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:04.826 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:04.826 EAL: Hugepages will be freed exactly as allocated. 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: TSC frequency is ~2200000 KHz 00:05:04.826 EAL: Main lcore 0 is ready (tid=7fa43802ea00;cpuset=[0]) 00:05:04.826 EAL: Trying to obtain current memory policy. 00:05:04.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.826 EAL: Restoring previous memory policy: 0 00:05:04.826 EAL: request: mp_malloc_sync 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: Heap on socket 0 was expanded by 2MB 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:04.826 EAL: Mem event callback 'spdk:(nil)' registered 00:05:04.826 00:05:04.826 00:05:04.826 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.826 http://cunit.sourceforge.net/ 00:05:04.826 00:05:04.826 00:05:04.826 Suite: components_suite 00:05:04.826 Test: vtophys_malloc_test ...passed 00:05:04.826 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:04.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.826 EAL: Restoring previous memory policy: 4 00:05:04.826 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.826 EAL: request: mp_malloc_sync 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: Heap on socket 0 was expanded by 4MB 00:05:04.826 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.826 EAL: request: mp_malloc_sync 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: Heap on socket 0 was shrunk by 4MB 00:05:04.826 EAL: Trying to obtain current memory policy. 
00:05:04.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.826 EAL: Restoring previous memory policy: 4 00:05:04.826 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.826 EAL: request: mp_malloc_sync 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: Heap on socket 0 was expanded by 6MB 00:05:04.826 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.826 EAL: request: mp_malloc_sync 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: Heap on socket 0 was shrunk by 6MB 00:05:04.826 EAL: Trying to obtain current memory policy. 00:05:04.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.826 EAL: Restoring previous memory policy: 4 00:05:04.826 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.826 EAL: request: mp_malloc_sync 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: Heap on socket 0 was expanded by 10MB 00:05:04.826 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.826 EAL: request: mp_malloc_sync 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: Heap on socket 0 was shrunk by 10MB 00:05:04.826 EAL: Trying to obtain current memory policy. 00:05:04.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.826 EAL: Restoring previous memory policy: 4 00:05:04.826 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.826 EAL: request: mp_malloc_sync 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: Heap on socket 0 was expanded by 18MB 00:05:04.826 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.826 EAL: request: mp_malloc_sync 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: Heap on socket 0 was shrunk by 18MB 00:05:04.826 EAL: Trying to obtain current memory policy. 00:05:04.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.826 EAL: Restoring previous memory policy: 4 00:05:04.826 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.826 EAL: request: mp_malloc_sync 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: Heap on socket 0 was expanded by 34MB 00:05:04.826 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.826 EAL: request: mp_malloc_sync 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: Heap on socket 0 was shrunk by 34MB 00:05:04.826 EAL: Trying to obtain current memory policy. 00:05:04.826 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.826 EAL: Restoring previous memory policy: 4 00:05:04.826 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.826 EAL: request: mp_malloc_sync 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.826 EAL: Heap on socket 0 was expanded by 66MB 00:05:04.826 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.826 EAL: request: mp_malloc_sync 00:05:04.826 EAL: No shared files mode enabled, IPC is disabled 00:05:04.827 EAL: Heap on socket 0 was shrunk by 66MB 00:05:04.827 EAL: Trying to obtain current memory policy. 
00:05:04.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.827 EAL: Restoring previous memory policy: 4 00:05:04.827 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.827 EAL: request: mp_malloc_sync 00:05:04.827 EAL: No shared files mode enabled, IPC is disabled 00:05:04.827 EAL: Heap on socket 0 was expanded by 130MB 00:05:04.827 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.827 EAL: request: mp_malloc_sync 00:05:04.827 EAL: No shared files mode enabled, IPC is disabled 00:05:04.827 EAL: Heap on socket 0 was shrunk by 130MB 00:05:04.827 EAL: Trying to obtain current memory policy. 00:05:04.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.085 EAL: Restoring previous memory policy: 4 00:05:05.085 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.085 EAL: request: mp_malloc_sync 00:05:05.085 EAL: No shared files mode enabled, IPC is disabled 00:05:05.085 EAL: Heap on socket 0 was expanded by 258MB 00:05:05.085 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.085 EAL: request: mp_malloc_sync 00:05:05.085 EAL: No shared files mode enabled, IPC is disabled 00:05:05.085 EAL: Heap on socket 0 was shrunk by 258MB 00:05:05.085 EAL: Trying to obtain current memory policy. 00:05:05.085 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.085 EAL: Restoring previous memory policy: 4 00:05:05.085 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.085 EAL: request: mp_malloc_sync 00:05:05.085 EAL: No shared files mode enabled, IPC is disabled 00:05:05.085 EAL: Heap on socket 0 was expanded by 514MB 00:05:05.344 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.344 EAL: request: mp_malloc_sync 00:05:05.344 EAL: No shared files mode enabled, IPC is disabled 00:05:05.344 EAL: Heap on socket 0 was shrunk by 514MB 00:05:05.344 EAL: Trying to obtain current memory policy. 
00:05:05.344 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.602 EAL: Restoring previous memory policy: 4 00:05:05.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.602 EAL: request: mp_malloc_sync 00:05:05.602 EAL: No shared files mode enabled, IPC is disabled 00:05:05.602 EAL: Heap on socket 0 was expanded by 1026MB 00:05:05.602 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.860 EAL: request: mp_malloc_sync 00:05:05.860 EAL: No shared files mode enabled, IPC is disabled 00:05:05.860 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:05.860 passed 00:05:05.860 00:05:05.860 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.860 suites 1 1 n/a 0 0 00:05:05.860 tests 2 2 2 0 0 00:05:05.860 asserts 497 497 497 0 n/a 00:05:05.860 00:05:05.860 Elapsed time = 1.017 seconds 00:05:05.860 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.860 EAL: request: mp_malloc_sync 00:05:05.860 EAL: No shared files mode enabled, IPC is disabled 00:05:05.860 EAL: Heap on socket 0 was shrunk by 2MB 00:05:05.860 EAL: No shared files mode enabled, IPC is disabled 00:05:05.860 EAL: No shared files mode enabled, IPC is disabled 00:05:05.860 EAL: No shared files mode enabled, IPC is disabled 00:05:05.860 00:05:05.860 real 0m1.156s 00:05:05.860 user 0m0.674s 00:05:05.860 sys 0m0.447s 00:05:05.860 00:31:23 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.860 00:31:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:05.860 ************************************ 00:05:05.860 END TEST env_vtophys 00:05:05.860 ************************************ 00:05:05.860 00:31:23 env -- common/autotest_common.sh@1142 -- # return 0 00:05:05.860 00:31:23 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:05.860 00:31:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.860 00:31:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.860 00:31:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.861 ************************************ 00:05:05.861 START TEST env_pci 00:05:05.861 ************************************ 00:05:05.861 00:31:23 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:05.861 00:05:05.861 00:05:05.861 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.861 http://cunit.sourceforge.net/ 00:05:05.861 00:05:05.861 00:05:05.861 Suite: pci 00:05:05.861 Test: pci_hook ...[2024-07-16 00:31:23.689168] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2836928 has claimed it 00:05:06.120 EAL: Cannot find device (10000:00:01.0) 00:05:06.120 EAL: Failed to attach device on primary process 00:05:06.120 passed 00:05:06.120 00:05:06.120 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.120 suites 1 1 n/a 0 0 00:05:06.120 tests 1 1 1 0 0 00:05:06.120 asserts 25 25 25 0 n/a 00:05:06.120 00:05:06.120 Elapsed time = 0.030 seconds 00:05:06.120 00:05:06.120 real 0m0.050s 00:05:06.120 user 0m0.016s 00:05:06.120 sys 0m0.033s 00:05:06.120 00:31:23 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.120 00:31:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:06.120 ************************************ 00:05:06.120 END TEST env_pci 00:05:06.120 ************************************ 
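The env suite above (memory map, vtophys translation, PCI claim) exercises SPDK's environment abstraction over DPDK. For orientation only, the following is a minimal C sketch of the API surface those tests drive: it brings up the env on core 0, allocates pinned DMA-safe memory, and resolves the virtual-to-physical translation. The app name is arbitrary and the build is assumed to link against an installed SPDK env library; this sketch is not part of the captured test output.

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
	struct spdk_env_opts opts;
	uint64_t phys = 0;
	void *buf;

	/* Bring up the SPDK environment (DPDK EAL underneath) pinned to core 0. */
	spdk_env_opts_init(&opts);
	opts.name = "vtophys_sketch";   /* arbitrary app name, not taken from the log */
	opts.core_mask = "0x1";
	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Allocate pinned, hugepage-backed memory; the physical address is written to 'phys'. */
	buf = spdk_dma_malloc(4096, 0x1000, &phys);
	if (buf == NULL) {
		fprintf(stderr, "spdk_dma_malloc failed\n");
		return 1;
	}

	/* spdk_vtophys() performs the same translation the vtophys test checks at many allocation sizes. */
	printf("vaddr=%p paddr=0x%" PRIx64 " (malloc-reported 0x%" PRIx64 ")\n",
	       buf, spdk_vtophys(buf, NULL), phys);

	spdk_dma_free(buf);
	return 0;
}

Built against a local SPDK tree, a program like this would typically be linked with the spdk_env_dpdk and DPDK libraries (for example via the pkg-config files SPDK installs), mirroring how the memory_ut and vtophys binaries exercised above are built.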
00:05:06.120 00:31:23 env -- common/autotest_common.sh@1142 -- # return 0 00:05:06.120 00:31:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:06.120 00:31:23 env -- env/env.sh@15 -- # uname 00:05:06.120 00:31:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:06.120 00:31:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:06.120 00:31:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.120 00:31:23 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:06.120 00:31:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.120 00:31:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.120 ************************************ 00:05:06.120 START TEST env_dpdk_post_init 00:05:06.120 ************************************ 00:05:06.120 00:31:23 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.120 EAL: Detected CPU lcores: 112 00:05:06.120 EAL: Detected NUMA nodes: 2 00:05:06.120 EAL: Detected shared linkage of DPDK 00:05:06.120 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:06.120 EAL: Selected IOVA mode 'VA' 00:05:06.120 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.120 EAL: VFIO support initialized 00:05:06.120 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:06.120 EAL: Using IOMMU type 1 (Type 1) 00:05:06.120 EAL: Ignore mapping IO port bar(1) 00:05:06.120 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 
00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:06.380 EAL: Ignore mapping IO port bar(1) 00:05:06.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:07.317 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:86:00.0 (socket 1) 00:05:10.653 EAL: Releasing PCI mapped resource for 0000:86:00.0 00:05:10.653 EAL: Calling pci_unmap_resource for 0000:86:00.0 at 0x202001040000 00:05:10.653 Starting DPDK initialization... 00:05:10.653 Starting SPDK post initialization... 00:05:10.653 SPDK NVMe probe 00:05:10.653 Attaching to 0000:86:00.0 00:05:10.653 Attached to 0000:86:00.0 00:05:10.653 Cleaning up... 00:05:10.653 00:05:10.653 real 0m4.458s 00:05:10.653 user 0m3.356s 00:05:10.653 sys 0m0.157s 00:05:10.653 00:31:28 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.653 00:31:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.653 ************************************ 00:05:10.653 END TEST env_dpdk_post_init 00:05:10.653 ************************************ 00:05:10.653 00:31:28 env -- common/autotest_common.sh@1142 -- # return 0 00:05:10.653 00:31:28 env -- env/env.sh@26 -- # uname 00:05:10.653 00:31:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:10.653 00:31:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.653 00:31:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.653 00:31:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.653 00:31:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.653 ************************************ 00:05:10.653 START TEST env_mem_callbacks 00:05:10.653 ************************************ 00:05:10.653 00:31:28 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:10.653 EAL: Detected CPU lcores: 112 00:05:10.653 EAL: Detected NUMA nodes: 2 00:05:10.653 EAL: Detected shared linkage of DPDK 00:05:10.653 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.653 EAL: Selected IOVA mode 'VA' 00:05:10.653 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.653 EAL: VFIO support initialized 00:05:10.653 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.653 00:05:10.653 00:05:10.653 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.653 http://cunit.sourceforge.net/ 00:05:10.653 00:05:10.653 00:05:10.653 Suite: memory 00:05:10.653 Test: test ... 
00:05:10.653 register 0x200000200000 2097152 00:05:10.653 malloc 3145728 00:05:10.653 register 0x200000400000 4194304 00:05:10.653 buf 0x200000500000 len 3145728 PASSED 00:05:10.653 malloc 64 00:05:10.653 buf 0x2000004fff40 len 64 PASSED 00:05:10.653 malloc 4194304 00:05:10.653 register 0x200000800000 6291456 00:05:10.653 buf 0x200000a00000 len 4194304 PASSED 00:05:10.653 free 0x200000500000 3145728 00:05:10.653 free 0x2000004fff40 64 00:05:10.653 unregister 0x200000400000 4194304 PASSED 00:05:10.653 free 0x200000a00000 4194304 00:05:10.653 unregister 0x200000800000 6291456 PASSED 00:05:10.653 malloc 8388608 00:05:10.653 register 0x200000400000 10485760 00:05:10.653 buf 0x200000600000 len 8388608 PASSED 00:05:10.653 free 0x200000600000 8388608 00:05:10.653 unregister 0x200000400000 10485760 PASSED 00:05:10.653 passed 00:05:10.653 00:05:10.653 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.653 suites 1 1 n/a 0 0 00:05:10.653 tests 1 1 1 0 0 00:05:10.653 asserts 15 15 15 0 n/a 00:05:10.653 00:05:10.653 Elapsed time = 0.008 seconds 00:05:10.653 00:05:10.653 real 0m0.060s 00:05:10.653 user 0m0.021s 00:05:10.653 sys 0m0.039s 00:05:10.653 00:31:28 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.653 00:31:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:10.653 ************************************ 00:05:10.653 END TEST env_mem_callbacks 00:05:10.653 ************************************ 00:05:10.653 00:31:28 env -- common/autotest_common.sh@1142 -- # return 0 00:05:10.653 00:05:10.653 real 0m6.385s 00:05:10.653 user 0m4.466s 00:05:10.653 sys 0m0.968s 00:05:10.653 00:31:28 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.653 00:31:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.653 ************************************ 00:05:10.653 END TEST env 00:05:10.653 ************************************ 00:05:10.653 00:31:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:10.653 00:31:28 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:10.653 00:31:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.653 00:31:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.653 00:31:28 -- common/autotest_common.sh@10 -- # set +x 00:05:10.911 ************************************ 00:05:10.911 START TEST rpc 00:05:10.911 ************************************ 00:05:10.911 00:31:28 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:10.911 * Looking for test storage... 00:05:10.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:10.911 00:31:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2837846 00:05:10.911 00:31:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:10.911 00:31:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.911 00:31:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2837846 00:05:10.911 00:31:28 rpc -- common/autotest_common.sh@829 -- # '[' -z 2837846 ']' 00:05:10.911 00:31:28 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.911 00:31:28 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.911 00:31:28 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
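For reference, the env sub-tests that just finished above are standalone binaries under test/env/ and can be re-run outside the harness. The sketch below only reuses paths and arguments that appear in this log; the SPDK_DIR variable is an assumption standing in for the Jenkins workspace checkout, and hugepages are assumed to be configured already (e.g. via scripts/setup.sh).

    # Sketch: re-running the env sub-tests by hand against a built SPDK tree
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # DPDK post-init test: single-core mask and fixed base virtual address, as in the log
    "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000

    # Memory registration callback test (the CUnit "memory" suite shown above)
    "$SPDK_DIR/test/env/mem_callbacks/mem_callbacks"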
00:05:10.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.911 00:31:28 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.911 00:31:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.911 [2024-07-16 00:31:28.646119] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:10.911 [2024-07-16 00:31:28.646178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837846 ] 00:05:10.911 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.911 [2024-07-16 00:31:28.731626] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.169 [2024-07-16 00:31:28.822267] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:11.169 [2024-07-16 00:31:28.822309] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2837846' to capture a snapshot of events at runtime. 00:05:11.169 [2024-07-16 00:31:28.822320] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:11.169 [2024-07-16 00:31:28.822329] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:11.170 [2024-07-16 00:31:28.822337] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2837846 for offline analysis/debug. 00:05:11.170 [2024-07-16 00:31:28.822361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.736 00:31:29 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.736 00:31:29 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:11.736 00:31:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.736 00:31:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.736 00:31:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:11.736 00:31:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:11.736 00:31:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.736 00:31:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.736 00:31:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.736 ************************************ 00:05:11.736 START TEST rpc_integrity 00:05:11.736 ************************************ 00:05:11.736 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:11.736 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:11.736 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.736 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.737 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.737 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:11.737 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:11.995 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:11.995 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.995 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.995 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.995 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.995 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:11.995 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.995 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.995 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.995 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.995 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.995 { 00:05:11.995 "name": "Malloc0", 00:05:11.995 "aliases": [ 00:05:11.995 "a3d2c04b-3894-4439-896f-fb68ee3f3185" 00:05:11.995 ], 00:05:11.995 "product_name": "Malloc disk", 00:05:11.995 "block_size": 512, 00:05:11.995 "num_blocks": 16384, 00:05:11.995 "uuid": "a3d2c04b-3894-4439-896f-fb68ee3f3185", 00:05:11.995 "assigned_rate_limits": { 00:05:11.995 "rw_ios_per_sec": 0, 00:05:11.995 "rw_mbytes_per_sec": 0, 00:05:11.995 "r_mbytes_per_sec": 0, 00:05:11.995 "w_mbytes_per_sec": 0 00:05:11.995 }, 00:05:11.995 "claimed": false, 00:05:11.995 "zoned": false, 00:05:11.995 "supported_io_types": { 00:05:11.995 "read": true, 00:05:11.995 "write": true, 00:05:11.995 "unmap": true, 00:05:11.995 "flush": true, 00:05:11.995 "reset": true, 00:05:11.995 "nvme_admin": false, 00:05:11.995 "nvme_io": false, 00:05:11.995 "nvme_io_md": false, 00:05:11.995 "write_zeroes": true, 00:05:11.995 "zcopy": true, 00:05:11.995 "get_zone_info": false, 00:05:11.995 "zone_management": false, 00:05:11.995 "zone_append": false, 00:05:11.995 "compare": false, 00:05:11.995 "compare_and_write": false, 00:05:11.995 "abort": true, 00:05:11.995 "seek_hole": false, 00:05:11.995 "seek_data": false, 00:05:11.995 "copy": true, 00:05:11.995 "nvme_iov_md": false 00:05:11.995 }, 00:05:11.995 "memory_domains": [ 00:05:11.995 { 00:05:11.995 "dma_device_id": "system", 00:05:11.995 "dma_device_type": 1 00:05:11.995 }, 00:05:11.995 { 00:05:11.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.995 "dma_device_type": 2 00:05:11.995 } 00:05:11.995 ], 00:05:11.995 "driver_specific": {} 00:05:11.995 } 00:05:11.995 ]' 00:05:11.995 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:11.995 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.995 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:11.995 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.995 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.995 [2024-07-16 00:31:29.679810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:11.995 [2024-07-16 00:31:29.679847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.995 [2024-07-16 00:31:29.679864] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfa1ad0 00:05:11.995 [2024-07-16 00:31:29.679874] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.995 
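The rpc_integrity test being run here drives ordinary bdev RPCs through the rpc_cmd wrapper. A minimal manual equivalent, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock and using only the RPC methods visible in this log, would look roughly like this sketch:

    # Sketch of the malloc + passthru round trip exercised by rpc_integrity
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"

    "$RPC" bdev_malloc_create 8 512                 # 8 MiB malloc bdev, 512 B blocks -> "Malloc0"
    "$RPC" bdev_passthru_create -b Malloc0 -p Passthru0
    "$RPC" bdev_get_bdevs | jq length               # the test expects 2 bdevs here

    # Tear down in reverse order, then confirm the bdev list is empty again
    "$RPC" bdev_passthru_delete Passthru0
    "$RPC" bdev_malloc_delete Malloc0
    "$RPC" bdev_get_bdevs | jq length               # the test expects 0 here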
[2024-07-16 00:31:29.681446] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.995 [2024-07-16 00:31:29.681473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.995 Passthru0 00:05:11.995 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.995 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:11.995 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.995 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.995 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.995 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.995 { 00:05:11.995 "name": "Malloc0", 00:05:11.995 "aliases": [ 00:05:11.995 "a3d2c04b-3894-4439-896f-fb68ee3f3185" 00:05:11.995 ], 00:05:11.995 "product_name": "Malloc disk", 00:05:11.995 "block_size": 512, 00:05:11.995 "num_blocks": 16384, 00:05:11.995 "uuid": "a3d2c04b-3894-4439-896f-fb68ee3f3185", 00:05:11.995 "assigned_rate_limits": { 00:05:11.995 "rw_ios_per_sec": 0, 00:05:11.995 "rw_mbytes_per_sec": 0, 00:05:11.995 "r_mbytes_per_sec": 0, 00:05:11.995 "w_mbytes_per_sec": 0 00:05:11.995 }, 00:05:11.995 "claimed": true, 00:05:11.995 "claim_type": "exclusive_write", 00:05:11.995 "zoned": false, 00:05:11.995 "supported_io_types": { 00:05:11.995 "read": true, 00:05:11.995 "write": true, 00:05:11.995 "unmap": true, 00:05:11.995 "flush": true, 00:05:11.995 "reset": true, 00:05:11.995 "nvme_admin": false, 00:05:11.995 "nvme_io": false, 00:05:11.995 "nvme_io_md": false, 00:05:11.995 "write_zeroes": true, 00:05:11.995 "zcopy": true, 00:05:11.995 "get_zone_info": false, 00:05:11.995 "zone_management": false, 00:05:11.995 "zone_append": false, 00:05:11.995 "compare": false, 00:05:11.995 "compare_and_write": false, 00:05:11.995 "abort": true, 00:05:11.995 "seek_hole": false, 00:05:11.995 "seek_data": false, 00:05:11.995 "copy": true, 00:05:11.995 "nvme_iov_md": false 00:05:11.995 }, 00:05:11.995 "memory_domains": [ 00:05:11.995 { 00:05:11.995 "dma_device_id": "system", 00:05:11.995 "dma_device_type": 1 00:05:11.995 }, 00:05:11.995 { 00:05:11.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.995 "dma_device_type": 2 00:05:11.995 } 00:05:11.995 ], 00:05:11.995 "driver_specific": {} 00:05:11.995 }, 00:05:11.995 { 00:05:11.995 "name": "Passthru0", 00:05:11.995 "aliases": [ 00:05:11.995 "5ee6b1f4-0229-504a-9f64-a0c8eff79548" 00:05:11.995 ], 00:05:11.995 "product_name": "passthru", 00:05:11.995 "block_size": 512, 00:05:11.995 "num_blocks": 16384, 00:05:11.995 "uuid": "5ee6b1f4-0229-504a-9f64-a0c8eff79548", 00:05:11.995 "assigned_rate_limits": { 00:05:11.995 "rw_ios_per_sec": 0, 00:05:11.995 "rw_mbytes_per_sec": 0, 00:05:11.995 "r_mbytes_per_sec": 0, 00:05:11.995 "w_mbytes_per_sec": 0 00:05:11.995 }, 00:05:11.995 "claimed": false, 00:05:11.995 "zoned": false, 00:05:11.995 "supported_io_types": { 00:05:11.995 "read": true, 00:05:11.995 "write": true, 00:05:11.995 "unmap": true, 00:05:11.995 "flush": true, 00:05:11.995 "reset": true, 00:05:11.995 "nvme_admin": false, 00:05:11.995 "nvme_io": false, 00:05:11.995 "nvme_io_md": false, 00:05:11.995 "write_zeroes": true, 00:05:11.995 "zcopy": true, 00:05:11.995 "get_zone_info": false, 00:05:11.995 "zone_management": false, 00:05:11.995 "zone_append": false, 00:05:11.995 "compare": false, 00:05:11.995 "compare_and_write": false, 00:05:11.995 "abort": true, 00:05:11.995 "seek_hole": false, 
00:05:11.995 "seek_data": false, 00:05:11.995 "copy": true, 00:05:11.995 "nvme_iov_md": false 00:05:11.995 }, 00:05:11.995 "memory_domains": [ 00:05:11.995 { 00:05:11.995 "dma_device_id": "system", 00:05:11.995 "dma_device_type": 1 00:05:11.995 }, 00:05:11.995 { 00:05:11.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.995 "dma_device_type": 2 00:05:11.995 } 00:05:11.995 ], 00:05:11.995 "driver_specific": { 00:05:11.995 "passthru": { 00:05:11.995 "name": "Passthru0", 00:05:11.995 "base_bdev_name": "Malloc0" 00:05:11.995 } 00:05:11.995 } 00:05:11.995 } 00:05:11.995 ]' 00:05:11.995 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:11.995 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.995 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:11.996 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.996 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.996 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.996 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:11.996 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.996 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.996 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.996 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:11.996 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:11.996 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.996 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:11.996 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.996 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:12.254 00:31:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.254 00:05:12.254 real 0m0.301s 00:05:12.254 user 0m0.188s 00:05:12.254 sys 0m0.045s 00:05:12.254 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.254 00:31:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.254 ************************************ 00:05:12.254 END TEST rpc_integrity 00:05:12.254 ************************************ 00:05:12.254 00:31:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:12.254 00:31:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:12.254 00:31:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.254 00:31:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.254 00:31:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.254 ************************************ 00:05:12.254 START TEST rpc_plugins 00:05:12.254 ************************************ 00:05:12.254 00:31:29 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:12.254 00:31:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:12.254 00:31:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.254 00:31:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.254 00:31:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.254 00:31:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:12.254 00:31:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:12.254 00:31:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.254 00:31:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.254 00:31:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.254 00:31:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:12.254 { 00:05:12.254 "name": "Malloc1", 00:05:12.254 "aliases": [ 00:05:12.254 "918dc974-d61d-4707-989e-2db767ffc19c" 00:05:12.254 ], 00:05:12.254 "product_name": "Malloc disk", 00:05:12.254 "block_size": 4096, 00:05:12.254 "num_blocks": 256, 00:05:12.254 "uuid": "918dc974-d61d-4707-989e-2db767ffc19c", 00:05:12.254 "assigned_rate_limits": { 00:05:12.254 "rw_ios_per_sec": 0, 00:05:12.254 "rw_mbytes_per_sec": 0, 00:05:12.254 "r_mbytes_per_sec": 0, 00:05:12.254 "w_mbytes_per_sec": 0 00:05:12.254 }, 00:05:12.254 "claimed": false, 00:05:12.254 "zoned": false, 00:05:12.254 "supported_io_types": { 00:05:12.254 "read": true, 00:05:12.254 "write": true, 00:05:12.254 "unmap": true, 00:05:12.254 "flush": true, 00:05:12.254 "reset": true, 00:05:12.254 "nvme_admin": false, 00:05:12.254 "nvme_io": false, 00:05:12.254 "nvme_io_md": false, 00:05:12.254 "write_zeroes": true, 00:05:12.254 "zcopy": true, 00:05:12.254 "get_zone_info": false, 00:05:12.254 "zone_management": false, 00:05:12.254 "zone_append": false, 00:05:12.254 "compare": false, 00:05:12.254 "compare_and_write": false, 00:05:12.254 "abort": true, 00:05:12.254 "seek_hole": false, 00:05:12.254 "seek_data": false, 00:05:12.254 "copy": true, 00:05:12.254 "nvme_iov_md": false 00:05:12.254 }, 00:05:12.254 "memory_domains": [ 00:05:12.254 { 00:05:12.254 "dma_device_id": "system", 00:05:12.254 "dma_device_type": 1 00:05:12.255 }, 00:05:12.255 { 00:05:12.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.255 "dma_device_type": 2 00:05:12.255 } 00:05:12.255 ], 00:05:12.255 "driver_specific": {} 00:05:12.255 } 00:05:12.255 ]' 00:05:12.255 00:31:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:12.255 00:31:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:12.255 00:31:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:12.255 00:31:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.255 00:31:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.255 00:31:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.255 00:31:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:12.255 00:31:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.255 00:31:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.255 00:31:30 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.255 00:31:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:12.255 00:31:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:12.255 00:31:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:12.255 00:05:12.255 real 0m0.150s 00:05:12.255 user 0m0.095s 00:05:12.255 sys 0m0.020s 00:05:12.255 00:31:30 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.255 00:31:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:12.255 ************************************ 00:05:12.255 END TEST rpc_plugins 00:05:12.255 ************************************ 00:05:12.255 00:31:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:12.255 00:31:30 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:12.255 00:31:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.255 00:31:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.255 00:31:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.514 ************************************ 00:05:12.514 START TEST rpc_trace_cmd_test 00:05:12.514 ************************************ 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:12.514 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2837846", 00:05:12.514 "tpoint_group_mask": "0x8", 00:05:12.514 "iscsi_conn": { 00:05:12.514 "mask": "0x2", 00:05:12.514 "tpoint_mask": "0x0" 00:05:12.514 }, 00:05:12.514 "scsi": { 00:05:12.514 "mask": "0x4", 00:05:12.514 "tpoint_mask": "0x0" 00:05:12.514 }, 00:05:12.514 "bdev": { 00:05:12.514 "mask": "0x8", 00:05:12.514 "tpoint_mask": "0xffffffffffffffff" 00:05:12.514 }, 00:05:12.514 "nvmf_rdma": { 00:05:12.514 "mask": "0x10", 00:05:12.514 "tpoint_mask": "0x0" 00:05:12.514 }, 00:05:12.514 "nvmf_tcp": { 00:05:12.514 "mask": "0x20", 00:05:12.514 "tpoint_mask": "0x0" 00:05:12.514 }, 00:05:12.514 "ftl": { 00:05:12.514 "mask": "0x40", 00:05:12.514 "tpoint_mask": "0x0" 00:05:12.514 }, 00:05:12.514 "blobfs": { 00:05:12.514 "mask": "0x80", 00:05:12.514 "tpoint_mask": "0x0" 00:05:12.514 }, 00:05:12.514 "dsa": { 00:05:12.514 "mask": "0x200", 00:05:12.514 "tpoint_mask": "0x0" 00:05:12.514 }, 00:05:12.514 "thread": { 00:05:12.514 "mask": "0x400", 00:05:12.514 "tpoint_mask": "0x0" 00:05:12.514 }, 00:05:12.514 "nvme_pcie": { 00:05:12.514 "mask": "0x800", 00:05:12.514 "tpoint_mask": "0x0" 00:05:12.514 }, 00:05:12.514 "iaa": { 00:05:12.514 "mask": "0x1000", 00:05:12.514 "tpoint_mask": "0x0" 00:05:12.514 }, 00:05:12.514 "nvme_tcp": { 00:05:12.514 "mask": "0x2000", 00:05:12.514 "tpoint_mask": "0x0" 00:05:12.514 }, 00:05:12.514 "bdev_nvme": { 00:05:12.514 "mask": "0x4000", 00:05:12.514 "tpoint_mask": "0x0" 00:05:12.514 }, 00:05:12.514 "sock": { 00:05:12.514 "mask": "0x8000", 00:05:12.514 "tpoint_mask": "0x0" 00:05:12.514 } 00:05:12.514 }' 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:12.514 00:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:12.773 00:31:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
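rpc_trace_cmd_test checks the trace metadata exposed over RPC for a target that was started with a tracepoint group enabled (spdk_tgt -e bdev in this run). The jq probes it performs can be reproduced directly; the snippet below is a sketch of those same checks against a running target.

    # Sketch: inspecting trace state of a spdk_tgt started with '-e bdev'
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"

    info=$("$RPC" trace_get_info)
    echo "$info" | jq -r .tpoint_shm_path       # /dev/shm/spdk_tgt_trace.pid<PID>
    echo "$info" | jq -r .tpoint_group_mask     # 0x8, i.e. the bdev tracepoint group
    echo "$info" | jq -r .bdev.tpoint_mask      # non-zero because the bdev group is enabled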
00:05:12.773 00:05:12.773 real 0m0.248s 00:05:12.773 user 0m0.210s 00:05:12.773 sys 0m0.028s 00:05:12.773 00:31:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.773 00:31:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:12.773 ************************************ 00:05:12.773 END TEST rpc_trace_cmd_test 00:05:12.773 ************************************ 00:05:12.773 00:31:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:12.773 00:31:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:12.773 00:31:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:12.773 00:31:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:12.773 00:31:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.773 00:31:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.773 00:31:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.773 ************************************ 00:05:12.773 START TEST rpc_daemon_integrity 00:05:12.773 ************************************ 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.773 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.773 { 00:05:12.773 "name": "Malloc2", 00:05:12.773 "aliases": [ 00:05:12.773 "64e7bd6e-5abc-4f67-af55-adde4c0972a0" 00:05:12.773 ], 00:05:12.773 "product_name": "Malloc disk", 00:05:12.773 "block_size": 512, 00:05:12.773 "num_blocks": 16384, 00:05:12.773 "uuid": "64e7bd6e-5abc-4f67-af55-adde4c0972a0", 00:05:12.773 "assigned_rate_limits": { 00:05:12.773 "rw_ios_per_sec": 0, 00:05:12.773 "rw_mbytes_per_sec": 0, 00:05:12.773 "r_mbytes_per_sec": 0, 00:05:12.773 "w_mbytes_per_sec": 0 00:05:12.773 }, 00:05:12.773 "claimed": false, 00:05:12.773 "zoned": false, 00:05:12.773 "supported_io_types": { 00:05:12.773 "read": true, 00:05:12.774 "write": true, 00:05:12.774 "unmap": true, 00:05:12.774 "flush": true, 00:05:12.774 "reset": true, 00:05:12.774 "nvme_admin": false, 00:05:12.774 "nvme_io": false, 
00:05:12.774 "nvme_io_md": false, 00:05:12.774 "write_zeroes": true, 00:05:12.774 "zcopy": true, 00:05:12.774 "get_zone_info": false, 00:05:12.774 "zone_management": false, 00:05:12.774 "zone_append": false, 00:05:12.774 "compare": false, 00:05:12.774 "compare_and_write": false, 00:05:12.774 "abort": true, 00:05:12.774 "seek_hole": false, 00:05:12.774 "seek_data": false, 00:05:12.774 "copy": true, 00:05:12.774 "nvme_iov_md": false 00:05:12.774 }, 00:05:12.774 "memory_domains": [ 00:05:12.774 { 00:05:12.774 "dma_device_id": "system", 00:05:12.774 "dma_device_type": 1 00:05:12.774 }, 00:05:12.774 { 00:05:12.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.774 "dma_device_type": 2 00:05:12.774 } 00:05:12.774 ], 00:05:12.774 "driver_specific": {} 00:05:12.774 } 00:05:12.774 ]' 00:05:12.774 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.774 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.774 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:12.774 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.774 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.774 [2024-07-16 00:31:30.582415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:12.774 [2024-07-16 00:31:30.582451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.774 [2024-07-16 00:31:30.582470] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf93340 00:05:12.774 [2024-07-16 00:31:30.582479] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.774 [2024-07-16 00:31:30.583900] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.774 [2024-07-16 00:31:30.583925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.774 Passthru0 00:05:12.774 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.774 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.774 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.774 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.774 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.032 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.032 { 00:05:13.032 "name": "Malloc2", 00:05:13.032 "aliases": [ 00:05:13.032 "64e7bd6e-5abc-4f67-af55-adde4c0972a0" 00:05:13.032 ], 00:05:13.032 "product_name": "Malloc disk", 00:05:13.032 "block_size": 512, 00:05:13.032 "num_blocks": 16384, 00:05:13.032 "uuid": "64e7bd6e-5abc-4f67-af55-adde4c0972a0", 00:05:13.032 "assigned_rate_limits": { 00:05:13.032 "rw_ios_per_sec": 0, 00:05:13.032 "rw_mbytes_per_sec": 0, 00:05:13.032 "r_mbytes_per_sec": 0, 00:05:13.032 "w_mbytes_per_sec": 0 00:05:13.032 }, 00:05:13.032 "claimed": true, 00:05:13.032 "claim_type": "exclusive_write", 00:05:13.032 "zoned": false, 00:05:13.032 "supported_io_types": { 00:05:13.032 "read": true, 00:05:13.032 "write": true, 00:05:13.032 "unmap": true, 00:05:13.032 "flush": true, 00:05:13.032 "reset": true, 00:05:13.032 "nvme_admin": false, 00:05:13.032 "nvme_io": false, 00:05:13.032 "nvme_io_md": false, 00:05:13.032 "write_zeroes": true, 00:05:13.032 "zcopy": true, 00:05:13.032 "get_zone_info": 
false, 00:05:13.032 "zone_management": false, 00:05:13.032 "zone_append": false, 00:05:13.032 "compare": false, 00:05:13.032 "compare_and_write": false, 00:05:13.032 "abort": true, 00:05:13.032 "seek_hole": false, 00:05:13.032 "seek_data": false, 00:05:13.032 "copy": true, 00:05:13.032 "nvme_iov_md": false 00:05:13.032 }, 00:05:13.032 "memory_domains": [ 00:05:13.032 { 00:05:13.032 "dma_device_id": "system", 00:05:13.032 "dma_device_type": 1 00:05:13.032 }, 00:05:13.032 { 00:05:13.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.032 "dma_device_type": 2 00:05:13.032 } 00:05:13.032 ], 00:05:13.032 "driver_specific": {} 00:05:13.032 }, 00:05:13.032 { 00:05:13.032 "name": "Passthru0", 00:05:13.032 "aliases": [ 00:05:13.032 "bab0b1f2-224d-5ef8-9206-0e7fcef392a7" 00:05:13.032 ], 00:05:13.032 "product_name": "passthru", 00:05:13.032 "block_size": 512, 00:05:13.032 "num_blocks": 16384, 00:05:13.032 "uuid": "bab0b1f2-224d-5ef8-9206-0e7fcef392a7", 00:05:13.032 "assigned_rate_limits": { 00:05:13.032 "rw_ios_per_sec": 0, 00:05:13.032 "rw_mbytes_per_sec": 0, 00:05:13.032 "r_mbytes_per_sec": 0, 00:05:13.032 "w_mbytes_per_sec": 0 00:05:13.032 }, 00:05:13.032 "claimed": false, 00:05:13.032 "zoned": false, 00:05:13.032 "supported_io_types": { 00:05:13.032 "read": true, 00:05:13.032 "write": true, 00:05:13.032 "unmap": true, 00:05:13.032 "flush": true, 00:05:13.032 "reset": true, 00:05:13.032 "nvme_admin": false, 00:05:13.032 "nvme_io": false, 00:05:13.032 "nvme_io_md": false, 00:05:13.032 "write_zeroes": true, 00:05:13.032 "zcopy": true, 00:05:13.032 "get_zone_info": false, 00:05:13.032 "zone_management": false, 00:05:13.032 "zone_append": false, 00:05:13.032 "compare": false, 00:05:13.032 "compare_and_write": false, 00:05:13.032 "abort": true, 00:05:13.032 "seek_hole": false, 00:05:13.032 "seek_data": false, 00:05:13.032 "copy": true, 00:05:13.032 "nvme_iov_md": false 00:05:13.032 }, 00:05:13.032 "memory_domains": [ 00:05:13.032 { 00:05:13.032 "dma_device_id": "system", 00:05:13.032 "dma_device_type": 1 00:05:13.032 }, 00:05:13.033 { 00:05:13.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.033 "dma_device_type": 2 00:05:13.033 } 00:05:13.033 ], 00:05:13.033 "driver_specific": { 00:05:13.033 "passthru": { 00:05:13.033 "name": "Passthru0", 00:05:13.033 "base_bdev_name": "Malloc2" 00:05:13.033 } 00:05:13.033 } 00:05:13.033 } 00:05:13.033 ]' 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.033 00:31:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.033 00:05:13.033 real 0m0.297s 00:05:13.033 user 0m0.192s 00:05:13.033 sys 0m0.042s 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.033 00:31:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.033 ************************************ 00:05:13.033 END TEST rpc_daemon_integrity 00:05:13.033 ************************************ 00:05:13.033 00:31:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:13.033 00:31:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:13.033 00:31:30 rpc -- rpc/rpc.sh@84 -- # killprocess 2837846 00:05:13.033 00:31:30 rpc -- common/autotest_common.sh@948 -- # '[' -z 2837846 ']' 00:05:13.033 00:31:30 rpc -- common/autotest_common.sh@952 -- # kill -0 2837846 00:05:13.033 00:31:30 rpc -- common/autotest_common.sh@953 -- # uname 00:05:13.033 00:31:30 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.033 00:31:30 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2837846 00:05:13.033 00:31:30 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.033 00:31:30 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.033 00:31:30 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2837846' 00:05:13.033 killing process with pid 2837846 00:05:13.033 00:31:30 rpc -- common/autotest_common.sh@967 -- # kill 2837846 00:05:13.033 00:31:30 rpc -- common/autotest_common.sh@972 -- # wait 2837846 00:05:13.599 00:05:13.599 real 0m2.654s 00:05:13.599 user 0m3.464s 00:05:13.599 sys 0m0.735s 00:05:13.599 00:31:31 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.599 00:31:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.599 ************************************ 00:05:13.599 END TEST rpc 00:05:13.599 ************************************ 00:05:13.599 00:31:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:13.599 00:31:31 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:13.599 00:31:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.599 00:31:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.599 00:31:31 -- common/autotest_common.sh@10 -- # set +x 00:05:13.599 ************************************ 00:05:13.599 START TEST skip_rpc 00:05:13.599 ************************************ 00:05:13.599 00:31:31 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:13.599 * Looking for test storage... 
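All of the rpc sub-tests that just completed share one target lifecycle: start spdk_tgt, wait for the RPC socket, run the checks, then kill the process from the trap. A condensed sketch of that pattern, using the helper names visible in this log (waitforlisten and killprocess come from test/common/autotest_common.sh), is:

    # Condensed sketch of the start/stop pattern used by test/rpc/rpc.sh
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    source "$SPDK_DIR/test/common/autotest_common.sh"

    "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
    spdk_pid=$!
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT

    waitforlisten $spdk_pid          # blocks until /var/tmp/spdk.sock accepts RPCs
    # ... rpc_integrity / rpc_plugins / rpc_trace_cmd_test / rpc_daemon_integrity run here ...

    trap - SIGINT SIGTERM EXIT
    killprocess $spdk_pid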
00:05:13.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:13.599 00:31:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:13.599 00:31:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:13.599 00:31:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:13.599 00:31:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.599 00:31:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.599 00:31:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.599 ************************************ 00:05:13.599 START TEST skip_rpc 00:05:13.599 ************************************ 00:05:13.599 00:31:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:13.599 00:31:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2838542 00:05:13.599 00:31:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.599 00:31:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:13.599 00:31:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:13.599 [2024-07-16 00:31:31.406525] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:13.599 [2024-07-16 00:31:31.406578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838542 ] 00:05:13.857 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.857 [2024-07-16 00:31:31.488235] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.857 [2024-07-16 00:31:31.576062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2838542 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2838542 ']' 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2838542 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2838542 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2838542' 00:05:19.159 killing process with pid 2838542 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2838542 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2838542 00:05:19.159 00:05:19.159 real 0m5.391s 00:05:19.159 user 0m5.124s 00:05:19.159 sys 0m0.292s 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.159 00:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.159 ************************************ 00:05:19.159 END TEST skip_rpc 00:05:19.159 ************************************ 00:05:19.159 00:31:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:19.159 00:31:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:19.159 00:31:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.159 00:31:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.159 00:31:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.159 ************************************ 00:05:19.159 START TEST skip_rpc_with_json 00:05:19.159 ************************************ 00:05:19.159 00:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:19.159 00:31:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:19.159 00:31:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2839618 00:05:19.159 00:31:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.159 00:31:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2839618 00:05:19.159 00:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2839618 ']' 00:05:19.159 00:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.159 00:31:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.159 00:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.159 00:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
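The skip_rpc case that just ended starts the target with --no-rpc-server and asserts that any RPC, here spdk_get_version, fails (that is what the NOT rpc_cmd wrapper checks). A minimal manual reproduction, using only the flags and methods shown in the log, would be:

    # Sketch: with --no-rpc-server the UNIX-domain RPC listener is never created,
    # so any rpc.py call is expected to fail.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                      # the test also sleeps before probing

    if "$SPDK_DIR/scripts/rpc.py" spdk_get_version; then
        echo "unexpected: RPC succeeded without an RPC server" >&2
    fi

    kill "$tgt_pid"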
00:05:19.159 00:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.159 00:31:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.159 [2024-07-16 00:31:36.909867] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:19.159 [2024-07-16 00:31:36.909975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839618 ] 00:05:19.159 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.449 [2024-07-16 00:31:37.028753] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.449 [2024-07-16 00:31:37.120062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.720 [2024-07-16 00:31:37.343599] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:19.720 request: 00:05:19.720 { 00:05:19.720 "trtype": "tcp", 00:05:19.720 "method": "nvmf_get_transports", 00:05:19.720 "req_id": 1 00:05:19.720 } 00:05:19.720 Got JSON-RPC error response 00:05:19.720 response: 00:05:19.720 { 00:05:19.720 "code": -19, 00:05:19.720 "message": "No such device" 00:05:19.720 } 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.720 [2024-07-16 00:31:37.351737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.720 00:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:19.720 { 00:05:19.720 "subsystems": [ 00:05:19.720 { 00:05:19.720 "subsystem": "vfio_user_target", 00:05:19.720 "config": null 00:05:19.720 }, 00:05:19.720 { 00:05:19.720 "subsystem": "keyring", 00:05:19.720 "config": [] 00:05:19.720 }, 00:05:19.720 { 00:05:19.720 "subsystem": "iobuf", 00:05:19.720 "config": [ 00:05:19.720 { 00:05:19.720 "method": "iobuf_set_options", 00:05:19.720 "params": { 00:05:19.720 "small_pool_count": 8192, 00:05:19.720 "large_pool_count": 1024, 00:05:19.720 "small_bufsize": 8192, 00:05:19.720 "large_bufsize": 
135168 00:05:19.720 } 00:05:19.720 } 00:05:19.720 ] 00:05:19.720 }, 00:05:19.720 { 00:05:19.720 "subsystem": "sock", 00:05:19.720 "config": [ 00:05:19.720 { 00:05:19.720 "method": "sock_set_default_impl", 00:05:19.720 "params": { 00:05:19.720 "impl_name": "posix" 00:05:19.720 } 00:05:19.720 }, 00:05:19.720 { 00:05:19.720 "method": "sock_impl_set_options", 00:05:19.720 "params": { 00:05:19.720 "impl_name": "ssl", 00:05:19.720 "recv_buf_size": 4096, 00:05:19.720 "send_buf_size": 4096, 00:05:19.720 "enable_recv_pipe": true, 00:05:19.720 "enable_quickack": false, 00:05:19.720 "enable_placement_id": 0, 00:05:19.720 "enable_zerocopy_send_server": true, 00:05:19.720 "enable_zerocopy_send_client": false, 00:05:19.720 "zerocopy_threshold": 0, 00:05:19.720 "tls_version": 0, 00:05:19.720 "enable_ktls": false 00:05:19.720 } 00:05:19.720 }, 00:05:19.720 { 00:05:19.720 "method": "sock_impl_set_options", 00:05:19.720 "params": { 00:05:19.720 "impl_name": "posix", 00:05:19.720 "recv_buf_size": 2097152, 00:05:19.720 "send_buf_size": 2097152, 00:05:19.720 "enable_recv_pipe": true, 00:05:19.720 "enable_quickack": false, 00:05:19.720 "enable_placement_id": 0, 00:05:19.720 "enable_zerocopy_send_server": true, 00:05:19.720 "enable_zerocopy_send_client": false, 00:05:19.720 "zerocopy_threshold": 0, 00:05:19.720 "tls_version": 0, 00:05:19.720 "enable_ktls": false 00:05:19.720 } 00:05:19.720 } 00:05:19.720 ] 00:05:19.720 }, 00:05:19.720 { 00:05:19.720 "subsystem": "vmd", 00:05:19.720 "config": [] 00:05:19.720 }, 00:05:19.720 { 00:05:19.720 "subsystem": "accel", 00:05:19.720 "config": [ 00:05:19.720 { 00:05:19.720 "method": "accel_set_options", 00:05:19.720 "params": { 00:05:19.720 "small_cache_size": 128, 00:05:19.720 "large_cache_size": 16, 00:05:19.720 "task_count": 2048, 00:05:19.720 "sequence_count": 2048, 00:05:19.720 "buf_count": 2048 00:05:19.720 } 00:05:19.720 } 00:05:19.720 ] 00:05:19.720 }, 00:05:19.720 { 00:05:19.720 "subsystem": "bdev", 00:05:19.720 "config": [ 00:05:19.720 { 00:05:19.720 "method": "bdev_set_options", 00:05:19.720 "params": { 00:05:19.720 "bdev_io_pool_size": 65535, 00:05:19.720 "bdev_io_cache_size": 256, 00:05:19.720 "bdev_auto_examine": true, 00:05:19.720 "iobuf_small_cache_size": 128, 00:05:19.720 "iobuf_large_cache_size": 16 00:05:19.720 } 00:05:19.720 }, 00:05:19.720 { 00:05:19.720 "method": "bdev_raid_set_options", 00:05:19.720 "params": { 00:05:19.720 "process_window_size_kb": 1024 00:05:19.720 } 00:05:19.720 }, 00:05:19.720 { 00:05:19.720 "method": "bdev_iscsi_set_options", 00:05:19.720 "params": { 00:05:19.720 "timeout_sec": 30 00:05:19.720 } 00:05:19.720 }, 00:05:19.720 { 00:05:19.720 "method": "bdev_nvme_set_options", 00:05:19.720 "params": { 00:05:19.720 "action_on_timeout": "none", 00:05:19.720 "timeout_us": 0, 00:05:19.720 "timeout_admin_us": 0, 00:05:19.720 "keep_alive_timeout_ms": 10000, 00:05:19.720 "arbitration_burst": 0, 00:05:19.720 "low_priority_weight": 0, 00:05:19.720 "medium_priority_weight": 0, 00:05:19.720 "high_priority_weight": 0, 00:05:19.720 "nvme_adminq_poll_period_us": 10000, 00:05:19.720 "nvme_ioq_poll_period_us": 0, 00:05:19.720 "io_queue_requests": 0, 00:05:19.720 "delay_cmd_submit": true, 00:05:19.720 "transport_retry_count": 4, 00:05:19.720 "bdev_retry_count": 3, 00:05:19.720 "transport_ack_timeout": 0, 00:05:19.720 "ctrlr_loss_timeout_sec": 0, 00:05:19.720 "reconnect_delay_sec": 0, 00:05:19.720 "fast_io_fail_timeout_sec": 0, 00:05:19.720 "disable_auto_failback": false, 00:05:19.720 "generate_uuids": false, 00:05:19.721 "transport_tos": 0, 
00:05:19.721 "nvme_error_stat": false, 00:05:19.721 "rdma_srq_size": 0, 00:05:19.721 "io_path_stat": false, 00:05:19.721 "allow_accel_sequence": false, 00:05:19.721 "rdma_max_cq_size": 0, 00:05:19.721 "rdma_cm_event_timeout_ms": 0, 00:05:19.721 "dhchap_digests": [ 00:05:19.721 "sha256", 00:05:19.721 "sha384", 00:05:19.721 "sha512" 00:05:19.721 ], 00:05:19.721 "dhchap_dhgroups": [ 00:05:19.721 "null", 00:05:19.721 "ffdhe2048", 00:05:19.721 "ffdhe3072", 00:05:19.721 "ffdhe4096", 00:05:19.721 "ffdhe6144", 00:05:19.721 "ffdhe8192" 00:05:19.721 ] 00:05:19.721 } 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "method": "bdev_nvme_set_hotplug", 00:05:19.721 "params": { 00:05:19.721 "period_us": 100000, 00:05:19.721 "enable": false 00:05:19.721 } 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "method": "bdev_wait_for_examine" 00:05:19.721 } 00:05:19.721 ] 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "subsystem": "scsi", 00:05:19.721 "config": null 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "subsystem": "scheduler", 00:05:19.721 "config": [ 00:05:19.721 { 00:05:19.721 "method": "framework_set_scheduler", 00:05:19.721 "params": { 00:05:19.721 "name": "static" 00:05:19.721 } 00:05:19.721 } 00:05:19.721 ] 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "subsystem": "vhost_scsi", 00:05:19.721 "config": [] 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "subsystem": "vhost_blk", 00:05:19.721 "config": [] 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "subsystem": "ublk", 00:05:19.721 "config": [] 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "subsystem": "nbd", 00:05:19.721 "config": [] 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "subsystem": "nvmf", 00:05:19.721 "config": [ 00:05:19.721 { 00:05:19.721 "method": "nvmf_set_config", 00:05:19.721 "params": { 00:05:19.721 "discovery_filter": "match_any", 00:05:19.721 "admin_cmd_passthru": { 00:05:19.721 "identify_ctrlr": false 00:05:19.721 } 00:05:19.721 } 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "method": "nvmf_set_max_subsystems", 00:05:19.721 "params": { 00:05:19.721 "max_subsystems": 1024 00:05:19.721 } 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "method": "nvmf_set_crdt", 00:05:19.721 "params": { 00:05:19.721 "crdt1": 0, 00:05:19.721 "crdt2": 0, 00:05:19.721 "crdt3": 0 00:05:19.721 } 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "method": "nvmf_create_transport", 00:05:19.721 "params": { 00:05:19.721 "trtype": "TCP", 00:05:19.721 "max_queue_depth": 128, 00:05:19.721 "max_io_qpairs_per_ctrlr": 127, 00:05:19.721 "in_capsule_data_size": 4096, 00:05:19.721 "max_io_size": 131072, 00:05:19.721 "io_unit_size": 131072, 00:05:19.721 "max_aq_depth": 128, 00:05:19.721 "num_shared_buffers": 511, 00:05:19.721 "buf_cache_size": 4294967295, 00:05:19.721 "dif_insert_or_strip": false, 00:05:19.721 "zcopy": false, 00:05:19.721 "c2h_success": true, 00:05:19.721 "sock_priority": 0, 00:05:19.721 "abort_timeout_sec": 1, 00:05:19.721 "ack_timeout": 0, 00:05:19.721 "data_wr_pool_size": 0 00:05:19.721 } 00:05:19.721 } 00:05:19.721 ] 00:05:19.721 }, 00:05:19.721 { 00:05:19.721 "subsystem": "iscsi", 00:05:19.721 "config": [ 00:05:19.721 { 00:05:19.721 "method": "iscsi_set_options", 00:05:19.721 "params": { 00:05:19.721 "node_base": "iqn.2016-06.io.spdk", 00:05:19.721 "max_sessions": 128, 00:05:19.721 "max_connections_per_session": 2, 00:05:19.721 "max_queue_depth": 64, 00:05:19.721 "default_time2wait": 2, 00:05:19.721 "default_time2retain": 20, 00:05:19.721 "first_burst_length": 8192, 00:05:19.721 "immediate_data": true, 00:05:19.721 "allow_duplicated_isid": false, 00:05:19.721 
"error_recovery_level": 0, 00:05:19.721 "nop_timeout": 60, 00:05:19.721 "nop_in_interval": 30, 00:05:19.721 "disable_chap": false, 00:05:19.721 "require_chap": false, 00:05:19.721 "mutual_chap": false, 00:05:19.721 "chap_group": 0, 00:05:19.721 "max_large_datain_per_connection": 64, 00:05:19.721 "max_r2t_per_connection": 4, 00:05:19.721 "pdu_pool_size": 36864, 00:05:19.721 "immediate_data_pool_size": 16384, 00:05:19.721 "data_out_pool_size": 2048 00:05:19.721 } 00:05:19.721 } 00:05:19.721 ] 00:05:19.721 } 00:05:19.721 ] 00:05:19.721 } 00:05:19.721 00:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:19.721 00:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2839618 00:05:19.721 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2839618 ']' 00:05:19.721 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2839618 00:05:19.721 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:19.721 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.721 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2839618 00:05:19.721 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.721 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.721 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2839618' 00:05:19.721 killing process with pid 2839618 00:05:19.721 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2839618 00:05:19.721 00:31:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2839618 00:05:20.288 00:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2839887 00:05:20.288 00:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:20.288 00:31:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:25.557 00:31:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2839887 00:05:25.557 00:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2839887 ']' 00:05:25.557 00:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2839887 00:05:25.557 00:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:25.557 00:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.557 00:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2839887 00:05:25.557 00:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.557 00:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.557 00:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2839887' 00:05:25.557 killing process with pid 2839887 00:05:25.557 00:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2839887 00:05:25.557 00:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2839887 
00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:25.557 00:05:25.557 real 0m6.473s 00:05:25.557 user 0m6.346s 00:05:25.557 sys 0m0.693s 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.557 ************************************ 00:05:25.557 END TEST skip_rpc_with_json 00:05:25.557 ************************************ 00:05:25.557 00:31:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:25.557 00:31:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:25.557 00:31:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.557 00:31:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.557 00:31:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.557 ************************************ 00:05:25.557 START TEST skip_rpc_with_delay 00:05:25.557 ************************************ 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:25.557 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:25.815 [2024-07-16 00:31:43.415202] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:25.815 [2024-07-16 00:31:43.415288] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:25.815 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:25.815 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.815 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.815 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.815 00:05:25.815 real 0m0.078s 00:05:25.815 user 0m0.048s 00:05:25.815 sys 0m0.029s 00:05:25.815 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.815 00:31:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:25.815 ************************************ 00:05:25.815 END TEST skip_rpc_with_delay 00:05:25.815 ************************************ 00:05:25.815 00:31:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:25.815 00:31:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:25.815 00:31:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:25.815 00:31:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:25.815 00:31:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.815 00:31:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.815 00:31:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.815 ************************************ 00:05:25.815 START TEST exit_on_failed_rpc_init 00:05:25.815 ************************************ 00:05:25.815 00:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:25.815 00:31:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2840985 00:05:25.815 00:31:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.815 00:31:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2840985 00:05:25.815 00:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2840985 ']' 00:05:25.815 00:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.815 00:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.815 00:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.815 00:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.815 00:31:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.815 [2024-07-16 00:31:43.560849] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:05:25.815 [2024-07-16 00:31:43.560900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2840985 ] 00:05:25.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.815 [2024-07-16 00:31:43.645214] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.072 [2024-07-16 00:31:43.735216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:27.006 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.006 [2024-07-16 00:31:44.563576] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:05:27.006 [2024-07-16 00:31:44.563634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841008 ] 00:05:27.006 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.006 [2024-07-16 00:31:44.647461] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.006 [2024-07-16 00:31:44.748500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.006 [2024-07-16 00:31:44.748586] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:27.006 [2024-07-16 00:31:44.748603] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:27.006 [2024-07-16 00:31:44.748619] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:27.265 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:27.265 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2840985 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2840985 ']' 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2840985 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2840985 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2840985' 00:05:27.266 killing process with pid 2840985 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2840985 00:05:27.266 00:31:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2840985 00:05:27.526 00:05:27.526 real 0m1.728s 00:05:27.526 user 0m2.119s 00:05:27.526 sys 0m0.471s 00:05:27.526 00:31:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.526 00:31:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.526 ************************************ 00:05:27.526 END TEST exit_on_failed_rpc_init 00:05:27.526 ************************************ 00:05:27.526 00:31:45 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:27.526 00:31:45 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:27.526 00:05:27.526 real 0m14.047s 00:05:27.526 user 0m13.787s 00:05:27.526 sys 0m1.738s 00:05:27.526 00:31:45 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.526 00:31:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.526 ************************************ 00:05:27.526 END TEST skip_rpc 00:05:27.526 ************************************ 00:05:27.526 00:31:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.526 00:31:45 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.526 00:31:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.526 00:31:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.526 00:31:45 -- common/autotest_common.sh@10 -- # set +x 00:05:27.526 ************************************ 00:05:27.526 START TEST rpc_client 00:05:27.526 ************************************ 00:05:27.526 00:31:45 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.785 * Looking for test storage... 00:05:27.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:27.785 00:31:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:27.785 OK 00:05:27.785 00:31:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:27.785 00:05:27.785 real 0m0.114s 00:05:27.785 user 0m0.051s 00:05:27.785 sys 0m0.071s 00:05:27.785 00:31:45 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.785 00:31:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:27.785 ************************************ 00:05:27.785 END TEST rpc_client 00:05:27.785 ************************************ 00:05:27.785 00:31:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.785 00:31:45 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:27.785 00:31:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.785 00:31:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.785 00:31:45 -- common/autotest_common.sh@10 -- # set +x 00:05:27.785 ************************************ 00:05:27.785 START TEST json_config 00:05:27.785 ************************************ 00:05:27.785 00:31:45 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:27.785 00:31:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.785 00:31:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:27.785 00:31:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.785 00:31:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.785 00:31:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.785 00:31:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.785 00:31:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.785 00:31:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.785 00:31:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.786 
00:31:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.786 00:31:45 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.786 00:31:45 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.786 00:31:45 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.786 00:31:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.786 00:31:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.786 00:31:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.786 00:31:45 json_config -- paths/export.sh@5 -- # export PATH 00:05:27.786 00:31:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@47 -- # : 0 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.786 00:31:45 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:27.786 00:31:45 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:27.786 INFO: JSON configuration test init 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:27.786 00:31:45 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.786 00:31:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.786 00:31:45 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:27.786 00:31:45 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.786 00:31:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.045 00:31:45 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:28.045 00:31:45 json_config -- json_config/common.sh@9 -- # local app=target 00:05:28.045 00:31:45 json_config -- json_config/common.sh@10 -- # shift 00:05:28.045 00:31:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.045 00:31:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.045 00:31:45 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.045 00:31:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.045 00:31:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.045 00:31:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2841373 00:05:28.045 00:31:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.045 Waiting for target to run... 00:05:28.045 00:31:45 json_config -- json_config/common.sh@25 -- # waitforlisten 2841373 /var/tmp/spdk_tgt.sock 00:05:28.045 00:31:45 json_config -- common/autotest_common.sh@829 -- # '[' -z 2841373 ']' 00:05:28.045 00:31:45 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.045 00:31:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:28.045 00:31:45 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.046 00:31:45 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.046 00:31:45 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.046 00:31:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.046 [2024-07-16 00:31:45.686747] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:28.046 [2024-07-16 00:31:45.686811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841373 ] 00:05:28.046 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.613 [2024-07-16 00:31:46.148572] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.613 [2024-07-16 00:31:46.245674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.872 00:31:46 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.872 00:31:46 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:28.872 00:31:46 json_config -- json_config/common.sh@26 -- # echo '' 00:05:28.872 00:05:28.872 00:31:46 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:28.872 00:31:46 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:28.872 00:31:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.872 00:31:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.872 00:31:46 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:28.872 00:31:46 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:28.872 00:31:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.872 00:31:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.872 00:31:46 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:28.872 00:31:46 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:28.872 00:31:46 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:32.160 00:31:49 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:32.160 00:31:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:32.160 00:31:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.160 00:31:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.160 00:31:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:32.160 00:31:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:32.160 00:31:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:32.160 00:31:49 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:32.160 00:31:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:32.160 00:31:49 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:32.419 00:31:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.419 00:31:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:32.419 00:31:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.419 00:31:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:32.419 00:31:50 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:32.419 00:31:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:32.678 MallocForNvmf0 00:05:32.678 00:31:50 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.678 00:31:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.936 MallocForNvmf1 00:05:32.936 00:31:50 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:32.936 00:31:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.194 [2024-07-16 00:31:50.780263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.194 00:31:50 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.194 00:31:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.452 00:31:51 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:33.452 00:31:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:33.711 00:31:51 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:33.711 00:31:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:33.711 00:31:51 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.711 00:31:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.970 [2024-07-16 00:31:51.767506] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:33.970 00:31:51 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:33.970 00:31:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.970 00:31:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.228 00:31:51 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:34.228 00:31:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.228 00:31:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.228 00:31:51 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:34.228 00:31:51 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.228 00:31:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.486 MallocBdevForConfigChangeCheck 00:05:34.486 00:31:52 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:34.486 00:31:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:34.486 00:31:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 00:31:52 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:34.486 00:31:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.764 00:31:52 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:34.764 INFO: shutting down applications... 00:05:34.764 00:31:52 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:34.764 00:31:52 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:34.764 00:31:52 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:34.764 00:31:52 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:36.668 Calling clear_iscsi_subsystem 00:05:36.668 Calling clear_nvmf_subsystem 00:05:36.668 Calling clear_nbd_subsystem 00:05:36.668 Calling clear_ublk_subsystem 00:05:36.668 Calling clear_vhost_blk_subsystem 00:05:36.668 Calling clear_vhost_scsi_subsystem 00:05:36.668 Calling clear_bdev_subsystem 00:05:36.668 00:31:54 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:36.668 00:31:54 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:36.668 00:31:54 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:36.668 00:31:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.668 00:31:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:36.668 00:31:54 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:36.927 00:31:54 json_config -- json_config/json_config.sh@345 -- # break 00:05:36.927 00:31:54 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:36.927 00:31:54 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:36.927 00:31:54 json_config -- json_config/common.sh@31 -- # local app=target 00:05:36.927 00:31:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.927 00:31:54 json_config -- json_config/common.sh@35 -- # [[ -n 2841373 ]] 00:05:36.927 00:31:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2841373 00:05:36.927 00:31:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.927 00:31:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.927 00:31:54 json_config -- json_config/common.sh@41 -- # kill -0 2841373 00:05:36.927 00:31:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.494 00:31:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.494 00:31:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.494 00:31:55 json_config -- json_config/common.sh@41 -- # kill -0 2841373 00:05:37.494 00:31:55 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.494 00:31:55 json_config -- json_config/common.sh@43 -- # break 00:05:37.494 00:31:55 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.494 00:31:55 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:37.494 SPDK target shutdown done 00:05:37.494 00:31:55 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:37.494 INFO: relaunching applications... 00:05:37.494 00:31:55 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.494 00:31:55 json_config -- json_config/common.sh@9 -- # local app=target 00:05:37.494 00:31:55 json_config -- json_config/common.sh@10 -- # shift 00:05:37.494 00:31:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:37.494 00:31:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:37.494 00:31:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:37.494 00:31:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.494 00:31:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.494 00:31:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2843334 00:05:37.494 00:31:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.494 Waiting for target to run... 00:05:37.494 00:31:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.494 00:31:55 json_config -- json_config/common.sh@25 -- # waitforlisten 2843334 /var/tmp/spdk_tgt.sock 00:05:37.494 00:31:55 json_config -- common/autotest_common.sh@829 -- # '[' -z 2843334 ']' 00:05:37.494 00:31:55 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.494 00:31:55 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.494 00:31:55 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.494 00:31:55 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.494 00:31:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.494 [2024-07-16 00:31:55.128308] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:05:37.494 [2024-07-16 00:31:55.128373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2843334 ] 00:05:37.494 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.752 [2024-07-16 00:31:55.577345] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.010 [2024-07-16 00:31:55.683182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.299 [2024-07-16 00:31:58.729157] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.299 [2024-07-16 00:31:58.761493] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:41.299 00:31:58 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.299 00:31:58 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:41.299 00:31:58 json_config -- json_config/common.sh@26 -- # echo '' 00:05:41.299 00:05:41.299 00:31:58 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:41.299 00:31:58 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:41.299 INFO: Checking if target configuration is the same... 00:05:41.299 00:31:58 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.299 00:31:58 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:41.299 00:31:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.299 + '[' 2 -ne 2 ']' 00:05:41.299 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:41.299 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:41.299 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.299 +++ basename /dev/fd/62 00:05:41.299 ++ mktemp /tmp/62.XXX 00:05:41.299 + tmp_file_1=/tmp/62.Z4F 00:05:41.299 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.299 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.299 + tmp_file_2=/tmp/spdk_tgt_config.json.MN9 00:05:41.299 + ret=0 00:05:41.299 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.558 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.558 + diff -u /tmp/62.Z4F /tmp/spdk_tgt_config.json.MN9 00:05:41.558 + echo 'INFO: JSON config files are the same' 00:05:41.558 INFO: JSON config files are the same 00:05:41.558 + rm /tmp/62.Z4F /tmp/spdk_tgt_config.json.MN9 00:05:41.558 + exit 0 00:05:41.558 00:31:59 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:41.558 00:31:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:41.558 INFO: changing configuration and checking if this can be detected... 
00:05:41.558 00:31:59 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:41.558 00:31:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:41.817 00:31:59 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:41.817 00:31:59 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.817 00:31:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.817 + '[' 2 -ne 2 ']' 00:05:41.817 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:41.817 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:41.817 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.817 +++ basename /dev/fd/62 00:05:41.817 ++ mktemp /tmp/62.XXX 00:05:41.817 + tmp_file_1=/tmp/62.kS2 00:05:41.817 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.817 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.817 + tmp_file_2=/tmp/spdk_tgt_config.json.BFF 00:05:41.817 + ret=0 00:05:41.817 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.076 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.076 + diff -u /tmp/62.kS2 /tmp/spdk_tgt_config.json.BFF 00:05:42.076 + ret=1 00:05:42.076 + echo '=== Start of file: /tmp/62.kS2 ===' 00:05:42.076 + cat /tmp/62.kS2 00:05:42.076 + echo '=== End of file: /tmp/62.kS2 ===' 00:05:42.076 + echo '' 00:05:42.076 + echo '=== Start of file: /tmp/spdk_tgt_config.json.BFF ===' 00:05:42.076 + cat /tmp/spdk_tgt_config.json.BFF 00:05:42.076 + echo '=== End of file: /tmp/spdk_tgt_config.json.BFF ===' 00:05:42.076 + echo '' 00:05:42.076 + rm /tmp/62.kS2 /tmp/spdk_tgt_config.json.BFF 00:05:42.076 + exit 1 00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:42.076 INFO: configuration change detected. 
00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:42.076 00:31:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.076 00:31:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@317 -- # [[ -n 2843334 ]] 00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:42.076 00:31:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.076 00:31:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:42.076 00:31:59 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:42.076 00:31:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.076 00:31:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.335 00:31:59 json_config -- json_config/json_config.sh@323 -- # killprocess 2843334 00:05:42.335 00:31:59 json_config -- common/autotest_common.sh@948 -- # '[' -z 2843334 ']' 00:05:42.335 00:31:59 json_config -- common/autotest_common.sh@952 -- # kill -0 2843334 00:05:42.335 00:31:59 json_config -- common/autotest_common.sh@953 -- # uname 00:05:42.335 00:31:59 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.335 00:31:59 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2843334 00:05:42.335 00:31:59 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.335 00:31:59 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.335 00:31:59 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2843334' 00:05:42.335 killing process with pid 2843334 00:05:42.335 00:31:59 json_config -- common/autotest_common.sh@967 -- # kill 2843334 00:05:42.335 00:31:59 json_config -- common/autotest_common.sh@972 -- # wait 2843334 00:05:44.240 00:32:01 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.240 00:32:01 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:44.240 00:32:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:44.240 00:32:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.240 00:32:01 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:44.240 00:32:01 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:44.240 INFO: Success 00:05:44.240 00:05:44.240 real 0m16.076s 
00:05:44.240 user 0m17.820s 00:05:44.240 sys 0m2.188s 00:05:44.240 00:32:01 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.240 00:32:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.240 ************************************ 00:05:44.240 END TEST json_config 00:05:44.240 ************************************ 00:05:44.240 00:32:01 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.240 00:32:01 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:44.240 00:32:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.240 00:32:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.240 00:32:01 -- common/autotest_common.sh@10 -- # set +x 00:05:44.240 ************************************ 00:05:44.240 START TEST json_config_extra_key 00:05:44.240 ************************************ 00:05:44.240 00:32:01 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:44.240 00:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.240 00:32:01 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.240 00:32:01 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.240 00:32:01 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.240 00:32:01 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.240 00:32:01 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.241 00:32:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.241 00:32:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.241 00:32:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:44.241 00:32:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.241 00:32:01 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:44.241 00:32:01 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:44.241 00:32:01 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:44.241 00:32:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.241 00:32:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.241 00:32:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.241 00:32:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:44.241 00:32:01 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:44.241 00:32:01 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:44.241 00:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:44.241 00:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:44.241 00:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:44.241 00:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:44.241 00:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:44.241 00:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:44.241 00:32:01 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:44.241 00:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:44.241 00:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:44.241 00:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:44.241 00:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:44.241 INFO: launching applications... 00:05:44.241 00:32:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:44.241 00:32:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:44.241 00:32:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:44.241 00:32:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:44.241 00:32:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:44.241 00:32:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:44.241 00:32:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.241 00:32:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.241 00:32:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2844507 00:05:44.241 00:32:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:44.241 Waiting for target to run... 00:05:44.241 00:32:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2844507 /var/tmp/spdk_tgt.sock 00:05:44.241 00:32:01 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2844507 ']' 00:05:44.241 00:32:01 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:44.241 00:32:01 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.241 00:32:01 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.241 00:32:01 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:44.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:44.241 00:32:01 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.241 00:32:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:44.241 [2024-07-16 00:32:01.823470] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:05:44.241 [2024-07-16 00:32:01.823534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2844507 ] 00:05:44.241 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.500 [2024-07-16 00:32:02.288925] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.757 [2024-07-16 00:32:02.393370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.016 00:32:02 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.016 00:32:02 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:45.016 00:32:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:45.016 00:05:45.016 00:32:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:45.016 INFO: shutting down applications... 00:05:45.016 00:32:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:45.016 00:32:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:45.016 00:32:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:45.016 00:32:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2844507 ]] 00:05:45.016 00:32:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2844507 00:05:45.016 00:32:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:45.016 00:32:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.016 00:32:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2844507 00:05:45.016 00:32:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.582 00:32:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.582 00:32:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.582 00:32:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2844507 00:05:45.582 00:32:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:45.582 00:32:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:45.582 00:32:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:45.582 00:32:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:45.582 SPDK target shutdown done 00:05:45.582 00:32:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:45.582 Success 00:05:45.582 00:05:45.582 real 0m1.600s 00:05:45.582 user 0m1.365s 00:05:45.582 sys 0m0.563s 00:05:45.582 00:32:03 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.582 00:32:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:45.582 ************************************ 00:05:45.582 END TEST json_config_extra_key 00:05:45.583 ************************************ 00:05:45.583 00:32:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.583 00:32:03 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.583 00:32:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.583 00:32:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.583 00:32:03 -- 
common/autotest_common.sh@10 -- # set +x 00:05:45.583 ************************************ 00:05:45.583 START TEST alias_rpc 00:05:45.583 ************************************ 00:05:45.583 00:32:03 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.840 * Looking for test storage... 00:05:45.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:45.840 00:32:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:45.840 00:32:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2844934 00:05:45.840 00:32:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2844934 00:05:45.840 00:32:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.840 00:32:03 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2844934 ']' 00:05:45.840 00:32:03 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.840 00:32:03 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.840 00:32:03 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.840 00:32:03 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.840 00:32:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.840 [2024-07-16 00:32:03.494459] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:45.840 [2024-07-16 00:32:03.494522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2844934 ] 00:05:45.840 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.840 [2024-07-16 00:32:03.576670] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.840 [2024-07-16 00:32:03.665969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.773 00:32:04 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.773 00:32:04 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:46.773 00:32:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:47.032 00:32:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2844934 00:05:47.032 00:32:04 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2844934 ']' 00:05:47.032 00:32:04 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2844934 00:05:47.032 00:32:04 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:47.032 00:32:04 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.032 00:32:04 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2844934 00:05:47.032 00:32:04 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.032 00:32:04 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.032 00:32:04 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2844934' 00:05:47.032 killing process with pid 2844934 00:05:47.032 00:32:04 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 2844934 00:05:47.032 00:32:04 alias_rpc -- common/autotest_common.sh@972 -- # wait 2844934 00:05:47.290 00:05:47.290 real 0m1.740s 00:05:47.290 user 0m2.029s 00:05:47.290 sys 0m0.471s 00:05:47.290 00:32:05 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.290 00:32:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.290 ************************************ 00:05:47.290 END TEST alias_rpc 00:05:47.290 ************************************ 00:05:47.290 00:32:05 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.290 00:32:05 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:47.290 00:32:05 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:47.290 00:32:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.290 00:32:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.290 00:32:05 -- common/autotest_common.sh@10 -- # set +x 00:05:47.549 ************************************ 00:05:47.549 START TEST spdkcli_tcp 00:05:47.549 ************************************ 00:05:47.549 00:32:05 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:47.549 * Looking for test storage... 00:05:47.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:47.549 00:32:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:47.549 00:32:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:47.549 00:32:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:47.549 00:32:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:47.549 00:32:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:47.549 00:32:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:47.549 00:32:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:47.549 00:32:05 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.549 00:32:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.549 00:32:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2845388 00:05:47.549 00:32:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2845388 00:05:47.549 00:32:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:47.549 00:32:05 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2845388 ']' 00:05:47.549 00:32:05 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.549 00:32:05 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.549 00:32:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.549 00:32:05 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.549 00:32:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.549 [2024-07-16 00:32:05.306319] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
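
The alias_rpc pass that finished just above does little more than start a bare spdk_tgt and replay a saved configuration through rpc.py load_config -i (the -i flag appears verbatim in the trace) before killing the target. A compressed sketch, with the input file as an assumed placeholder since the log does not show which JSON is piped in:

# Sketch: replay a JSON config through rpc.py on a plain spdk_tgt instance.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
CONF=/tmp/alias_config.json    # hypothetical input; not shown in the log

"$SPDK_DIR/build/bin/spdk_tgt" &
tgt_pid=$!
# ... wait for /var/tmp/spdk.sock as in the sketch above ...
"$SPDK_DIR/scripts/rpc.py" load_config -i < "$CONF"
kill "$tgt_pid"
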
00:05:47.549 [2024-07-16 00:32:05.306387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845388 ] 00:05:47.549 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.807 [2024-07-16 00:32:05.391326] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.808 [2024-07-16 00:32:05.484477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.808 [2024-07-16 00:32:05.484482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.744 00:32:06 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.744 00:32:06 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:48.744 00:32:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2845474 00:05:48.744 00:32:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:48.744 00:32:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:48.744 [ 00:05:48.744 "bdev_malloc_delete", 00:05:48.744 "bdev_malloc_create", 00:05:48.744 "bdev_null_resize", 00:05:48.744 "bdev_null_delete", 00:05:48.744 "bdev_null_create", 00:05:48.744 "bdev_nvme_cuse_unregister", 00:05:48.744 "bdev_nvme_cuse_register", 00:05:48.744 "bdev_opal_new_user", 00:05:48.744 "bdev_opal_set_lock_state", 00:05:48.744 "bdev_opal_delete", 00:05:48.744 "bdev_opal_get_info", 00:05:48.744 "bdev_opal_create", 00:05:48.744 "bdev_nvme_opal_revert", 00:05:48.744 "bdev_nvme_opal_init", 00:05:48.744 "bdev_nvme_send_cmd", 00:05:48.744 "bdev_nvme_get_path_iostat", 00:05:48.744 "bdev_nvme_get_mdns_discovery_info", 00:05:48.744 "bdev_nvme_stop_mdns_discovery", 00:05:48.744 "bdev_nvme_start_mdns_discovery", 00:05:48.744 "bdev_nvme_set_multipath_policy", 00:05:48.745 "bdev_nvme_set_preferred_path", 00:05:48.745 "bdev_nvme_get_io_paths", 00:05:48.745 "bdev_nvme_remove_error_injection", 00:05:48.745 "bdev_nvme_add_error_injection", 00:05:48.745 "bdev_nvme_get_discovery_info", 00:05:48.745 "bdev_nvme_stop_discovery", 00:05:48.745 "bdev_nvme_start_discovery", 00:05:48.745 "bdev_nvme_get_controller_health_info", 00:05:48.745 "bdev_nvme_disable_controller", 00:05:48.745 "bdev_nvme_enable_controller", 00:05:48.745 "bdev_nvme_reset_controller", 00:05:48.745 "bdev_nvme_get_transport_statistics", 00:05:48.745 "bdev_nvme_apply_firmware", 00:05:48.745 "bdev_nvme_detach_controller", 00:05:48.745 "bdev_nvme_get_controllers", 00:05:48.745 "bdev_nvme_attach_controller", 00:05:48.745 "bdev_nvme_set_hotplug", 00:05:48.745 "bdev_nvme_set_options", 00:05:48.745 "bdev_passthru_delete", 00:05:48.745 "bdev_passthru_create", 00:05:48.745 "bdev_lvol_set_parent_bdev", 00:05:48.745 "bdev_lvol_set_parent", 00:05:48.745 "bdev_lvol_check_shallow_copy", 00:05:48.745 "bdev_lvol_start_shallow_copy", 00:05:48.745 "bdev_lvol_grow_lvstore", 00:05:48.745 "bdev_lvol_get_lvols", 00:05:48.745 "bdev_lvol_get_lvstores", 00:05:48.745 "bdev_lvol_delete", 00:05:48.745 "bdev_lvol_set_read_only", 00:05:48.745 "bdev_lvol_resize", 00:05:48.745 "bdev_lvol_decouple_parent", 00:05:48.745 "bdev_lvol_inflate", 00:05:48.745 "bdev_lvol_rename", 00:05:48.745 "bdev_lvol_clone_bdev", 00:05:48.745 "bdev_lvol_clone", 00:05:48.745 "bdev_lvol_snapshot", 00:05:48.745 "bdev_lvol_create", 00:05:48.745 "bdev_lvol_delete_lvstore", 00:05:48.745 
"bdev_lvol_rename_lvstore", 00:05:48.745 "bdev_lvol_create_lvstore", 00:05:48.745 "bdev_raid_set_options", 00:05:48.745 "bdev_raid_remove_base_bdev", 00:05:48.745 "bdev_raid_add_base_bdev", 00:05:48.745 "bdev_raid_delete", 00:05:48.745 "bdev_raid_create", 00:05:48.745 "bdev_raid_get_bdevs", 00:05:48.745 "bdev_error_inject_error", 00:05:48.745 "bdev_error_delete", 00:05:48.745 "bdev_error_create", 00:05:48.745 "bdev_split_delete", 00:05:48.745 "bdev_split_create", 00:05:48.745 "bdev_delay_delete", 00:05:48.745 "bdev_delay_create", 00:05:48.745 "bdev_delay_update_latency", 00:05:48.745 "bdev_zone_block_delete", 00:05:48.745 "bdev_zone_block_create", 00:05:48.745 "blobfs_create", 00:05:48.745 "blobfs_detect", 00:05:48.745 "blobfs_set_cache_size", 00:05:48.745 "bdev_aio_delete", 00:05:48.745 "bdev_aio_rescan", 00:05:48.745 "bdev_aio_create", 00:05:48.745 "bdev_ftl_set_property", 00:05:48.745 "bdev_ftl_get_properties", 00:05:48.745 "bdev_ftl_get_stats", 00:05:48.745 "bdev_ftl_unmap", 00:05:48.745 "bdev_ftl_unload", 00:05:48.745 "bdev_ftl_delete", 00:05:48.745 "bdev_ftl_load", 00:05:48.745 "bdev_ftl_create", 00:05:48.745 "bdev_virtio_attach_controller", 00:05:48.745 "bdev_virtio_scsi_get_devices", 00:05:48.745 "bdev_virtio_detach_controller", 00:05:48.745 "bdev_virtio_blk_set_hotplug", 00:05:48.745 "bdev_iscsi_delete", 00:05:48.745 "bdev_iscsi_create", 00:05:48.745 "bdev_iscsi_set_options", 00:05:48.745 "accel_error_inject_error", 00:05:48.745 "ioat_scan_accel_module", 00:05:48.745 "dsa_scan_accel_module", 00:05:48.745 "iaa_scan_accel_module", 00:05:48.745 "vfu_virtio_create_scsi_endpoint", 00:05:48.745 "vfu_virtio_scsi_remove_target", 00:05:48.745 "vfu_virtio_scsi_add_target", 00:05:48.745 "vfu_virtio_create_blk_endpoint", 00:05:48.745 "vfu_virtio_delete_endpoint", 00:05:48.745 "keyring_file_remove_key", 00:05:48.745 "keyring_file_add_key", 00:05:48.745 "keyring_linux_set_options", 00:05:48.745 "iscsi_get_histogram", 00:05:48.745 "iscsi_enable_histogram", 00:05:48.745 "iscsi_set_options", 00:05:48.745 "iscsi_get_auth_groups", 00:05:48.745 "iscsi_auth_group_remove_secret", 00:05:48.745 "iscsi_auth_group_add_secret", 00:05:48.745 "iscsi_delete_auth_group", 00:05:48.745 "iscsi_create_auth_group", 00:05:48.745 "iscsi_set_discovery_auth", 00:05:48.745 "iscsi_get_options", 00:05:48.745 "iscsi_target_node_request_logout", 00:05:48.745 "iscsi_target_node_set_redirect", 00:05:48.745 "iscsi_target_node_set_auth", 00:05:48.745 "iscsi_target_node_add_lun", 00:05:48.745 "iscsi_get_stats", 00:05:48.745 "iscsi_get_connections", 00:05:48.745 "iscsi_portal_group_set_auth", 00:05:48.745 "iscsi_start_portal_group", 00:05:48.745 "iscsi_delete_portal_group", 00:05:48.745 "iscsi_create_portal_group", 00:05:48.745 "iscsi_get_portal_groups", 00:05:48.745 "iscsi_delete_target_node", 00:05:48.745 "iscsi_target_node_remove_pg_ig_maps", 00:05:48.745 "iscsi_target_node_add_pg_ig_maps", 00:05:48.745 "iscsi_create_target_node", 00:05:48.745 "iscsi_get_target_nodes", 00:05:48.745 "iscsi_delete_initiator_group", 00:05:48.745 "iscsi_initiator_group_remove_initiators", 00:05:48.745 "iscsi_initiator_group_add_initiators", 00:05:48.745 "iscsi_create_initiator_group", 00:05:48.745 "iscsi_get_initiator_groups", 00:05:48.745 "nvmf_set_crdt", 00:05:48.745 "nvmf_set_config", 00:05:48.745 "nvmf_set_max_subsystems", 00:05:48.745 "nvmf_stop_mdns_prr", 00:05:48.745 "nvmf_publish_mdns_prr", 00:05:48.745 "nvmf_subsystem_get_listeners", 00:05:48.745 "nvmf_subsystem_get_qpairs", 00:05:48.745 "nvmf_subsystem_get_controllers", 00:05:48.745 
"nvmf_get_stats", 00:05:48.745 "nvmf_get_transports", 00:05:48.745 "nvmf_create_transport", 00:05:48.745 "nvmf_get_targets", 00:05:48.745 "nvmf_delete_target", 00:05:48.745 "nvmf_create_target", 00:05:48.745 "nvmf_subsystem_allow_any_host", 00:05:48.745 "nvmf_subsystem_remove_host", 00:05:48.745 "nvmf_subsystem_add_host", 00:05:48.745 "nvmf_ns_remove_host", 00:05:48.745 "nvmf_ns_add_host", 00:05:48.745 "nvmf_subsystem_remove_ns", 00:05:48.745 "nvmf_subsystem_add_ns", 00:05:48.745 "nvmf_subsystem_listener_set_ana_state", 00:05:48.745 "nvmf_discovery_get_referrals", 00:05:48.745 "nvmf_discovery_remove_referral", 00:05:48.745 "nvmf_discovery_add_referral", 00:05:48.745 "nvmf_subsystem_remove_listener", 00:05:48.745 "nvmf_subsystem_add_listener", 00:05:48.745 "nvmf_delete_subsystem", 00:05:48.745 "nvmf_create_subsystem", 00:05:48.745 "nvmf_get_subsystems", 00:05:48.745 "env_dpdk_get_mem_stats", 00:05:48.745 "nbd_get_disks", 00:05:48.745 "nbd_stop_disk", 00:05:48.745 "nbd_start_disk", 00:05:48.745 "ublk_recover_disk", 00:05:48.745 "ublk_get_disks", 00:05:48.745 "ublk_stop_disk", 00:05:48.745 "ublk_start_disk", 00:05:48.745 "ublk_destroy_target", 00:05:48.745 "ublk_create_target", 00:05:48.745 "virtio_blk_create_transport", 00:05:48.745 "virtio_blk_get_transports", 00:05:48.745 "vhost_controller_set_coalescing", 00:05:48.745 "vhost_get_controllers", 00:05:48.745 "vhost_delete_controller", 00:05:48.745 "vhost_create_blk_controller", 00:05:48.745 "vhost_scsi_controller_remove_target", 00:05:48.745 "vhost_scsi_controller_add_target", 00:05:48.745 "vhost_start_scsi_controller", 00:05:48.745 "vhost_create_scsi_controller", 00:05:48.745 "thread_set_cpumask", 00:05:48.745 "framework_get_governor", 00:05:48.745 "framework_get_scheduler", 00:05:48.745 "framework_set_scheduler", 00:05:48.745 "framework_get_reactors", 00:05:48.745 "thread_get_io_channels", 00:05:48.745 "thread_get_pollers", 00:05:48.745 "thread_get_stats", 00:05:48.745 "framework_monitor_context_switch", 00:05:48.745 "spdk_kill_instance", 00:05:48.745 "log_enable_timestamps", 00:05:48.745 "log_get_flags", 00:05:48.745 "log_clear_flag", 00:05:48.745 "log_set_flag", 00:05:48.745 "log_get_level", 00:05:48.745 "log_set_level", 00:05:48.745 "log_get_print_level", 00:05:48.745 "log_set_print_level", 00:05:48.745 "framework_enable_cpumask_locks", 00:05:48.745 "framework_disable_cpumask_locks", 00:05:48.745 "framework_wait_init", 00:05:48.745 "framework_start_init", 00:05:48.745 "scsi_get_devices", 00:05:48.745 "bdev_get_histogram", 00:05:48.745 "bdev_enable_histogram", 00:05:48.745 "bdev_set_qos_limit", 00:05:48.745 "bdev_set_qd_sampling_period", 00:05:48.745 "bdev_get_bdevs", 00:05:48.745 "bdev_reset_iostat", 00:05:48.745 "bdev_get_iostat", 00:05:48.745 "bdev_examine", 00:05:48.745 "bdev_wait_for_examine", 00:05:48.745 "bdev_set_options", 00:05:48.745 "notify_get_notifications", 00:05:48.745 "notify_get_types", 00:05:48.745 "accel_get_stats", 00:05:48.745 "accel_set_options", 00:05:48.745 "accel_set_driver", 00:05:48.745 "accel_crypto_key_destroy", 00:05:48.745 "accel_crypto_keys_get", 00:05:48.745 "accel_crypto_key_create", 00:05:48.745 "accel_assign_opc", 00:05:48.745 "accel_get_module_info", 00:05:48.745 "accel_get_opc_assignments", 00:05:48.745 "vmd_rescan", 00:05:48.745 "vmd_remove_device", 00:05:48.745 "vmd_enable", 00:05:48.745 "sock_get_default_impl", 00:05:48.745 "sock_set_default_impl", 00:05:48.745 "sock_impl_set_options", 00:05:48.745 "sock_impl_get_options", 00:05:48.745 "iobuf_get_stats", 00:05:48.745 "iobuf_set_options", 
00:05:48.745 "keyring_get_keys", 00:05:48.745 "framework_get_pci_devices", 00:05:48.745 "framework_get_config", 00:05:48.745 "framework_get_subsystems", 00:05:48.745 "vfu_tgt_set_base_path", 00:05:48.745 "trace_get_info", 00:05:48.745 "trace_get_tpoint_group_mask", 00:05:48.745 "trace_disable_tpoint_group", 00:05:48.745 "trace_enable_tpoint_group", 00:05:48.745 "trace_clear_tpoint_mask", 00:05:48.745 "trace_set_tpoint_mask", 00:05:48.745 "spdk_get_version", 00:05:48.745 "rpc_get_methods" 00:05:48.745 ] 00:05:48.745 00:32:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:48.745 00:32:06 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.745 00:32:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:48.745 00:32:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:48.745 00:32:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2845388 00:05:48.745 00:32:06 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2845388 ']' 00:05:48.745 00:32:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2845388 00:05:48.745 00:32:06 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:48.745 00:32:06 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.746 00:32:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2845388 00:05:49.004 00:32:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.004 00:32:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.004 00:32:06 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2845388' 00:05:49.004 killing process with pid 2845388 00:05:49.004 00:32:06 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2845388 00:05:49.004 00:32:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2845388 00:05:49.262 00:05:49.262 real 0m1.786s 00:05:49.262 user 0m3.477s 00:05:49.262 sys 0m0.493s 00:05:49.262 00:32:06 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.262 00:32:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.262 ************************************ 00:05:49.262 END TEST spdkcli_tcp 00:05:49.262 ************************************ 00:05:49.262 00:32:06 -- common/autotest_common.sh@1142 -- # return 0 00:05:49.262 00:32:06 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.262 00:32:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.262 00:32:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.262 00:32:06 -- common/autotest_common.sh@10 -- # set +x 00:05:49.262 ************************************ 00:05:49.262 START TEST dpdk_mem_utility 00:05:49.262 ************************************ 00:05:49.262 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.262 * Looking for test storage... 
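
The spdkcli_tcp run above exposes the target's UNIX-domain RPC socket over TCP with socat and then issues RPCs against 127.0.0.1:9998. Stripped to its essentials (the socat command, address, port and the rpc.py retry/timeout flags are the ones visible in the trace; only the cleanup at the end is added glue):

# Sketch: bridge the spdk_tgt UNIX RPC socket to TCP and talk to it remotely.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
IP_ADDRESS=127.0.0.1
PORT=9998

socat TCP-LISTEN:$PORT UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# -r/-t: connection retries and per-call timeout, as used by the test above.
"$SPDK_DIR/scripts/rpc.py" -r 100 -t 2 -s "$IP_ADDRESS" -p "$PORT" rpc_get_methods

kill "$socat_pid"
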
00:05:49.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:49.262 00:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:49.262 00:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2845725 00:05:49.262 00:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2845725 00:05:49.262 00:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.262 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2845725 ']' 00:05:49.262 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.262 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.262 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.262 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.263 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.522 [2024-07-16 00:32:07.144839] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:49.522 [2024-07-16 00:32:07.144902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845725 ] 00:05:49.522 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.522 [2024-07-16 00:32:07.227039] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.522 [2024-07-16 00:32:07.318187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.781 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.781 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:49.781 00:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:49.781 00:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:49.781 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.781 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.781 { 00:05:49.781 "filename": "/tmp/spdk_mem_dump.txt" 00:05:49.781 } 00:05:49.781 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.781 00:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:49.781 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:49.781 1 heaps totaling size 814.000000 MiB 00:05:49.781 size: 814.000000 MiB heap id: 0 00:05:49.781 end heaps---------- 00:05:49.781 8 mempools totaling size 598.116089 MiB 00:05:49.781 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:49.781 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:49.781 size: 84.521057 MiB name: bdev_io_2845725 00:05:49.781 size: 51.011292 MiB name: evtpool_2845725 00:05:49.781 
size: 50.003479 MiB name: msgpool_2845725 00:05:49.781 size: 21.763794 MiB name: PDU_Pool 00:05:49.781 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:49.781 size: 0.026123 MiB name: Session_Pool 00:05:49.781 end mempools------- 00:05:49.781 6 memzones totaling size 4.142822 MiB 00:05:49.781 size: 1.000366 MiB name: RG_ring_0_2845725 00:05:49.781 size: 1.000366 MiB name: RG_ring_1_2845725 00:05:49.781 size: 1.000366 MiB name: RG_ring_4_2845725 00:05:49.781 size: 1.000366 MiB name: RG_ring_5_2845725 00:05:49.781 size: 0.125366 MiB name: RG_ring_2_2845725 00:05:49.781 size: 0.015991 MiB name: RG_ring_3_2845725 00:05:49.781 end memzones------- 00:05:49.781 00:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:50.041 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:50.041 list of free elements. size: 12.519348 MiB 00:05:50.041 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:50.041 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:50.041 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:50.041 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:50.041 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:50.041 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:50.041 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:50.041 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:50.041 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:50.041 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:50.041 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:50.041 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:50.041 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:50.041 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:50.041 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:50.041 list of standard malloc elements. 
size: 199.218079 MiB 00:05:50.041 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:50.041 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:50.041 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:50.041 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:50.041 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:50.041 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:50.041 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:50.041 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:50.041 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:50.041 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:50.041 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:50.041 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:50.041 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:50.041 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:50.041 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:50.041 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:50.041 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:50.041 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:50.041 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:50.041 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:50.041 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:50.041 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:50.041 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:50.041 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:50.041 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:50.041 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:50.041 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:50.042 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:50.042 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:50.042 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:50.042 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:50.042 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:50.042 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:50.042 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:50.042 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:50.042 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:50.042 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:50.042 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:50.042 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:50.042 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:50.042 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:50.042 list of memzone associated elements. 
size: 602.262573 MiB 00:05:50.042 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:50.042 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:50.042 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:50.042 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:50.042 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:50.042 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2845725_0 00:05:50.042 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:50.042 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2845725_0 00:05:50.042 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:50.042 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2845725_0 00:05:50.042 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:50.042 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:50.042 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:50.042 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:50.042 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:50.042 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2845725 00:05:50.042 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:50.042 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2845725 00:05:50.042 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:50.042 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2845725 00:05:50.042 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:50.042 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:50.042 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:50.042 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:50.042 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:50.042 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:50.042 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:50.042 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:50.042 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:50.042 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2845725 00:05:50.042 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:50.042 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2845725 00:05:50.042 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:50.042 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2845725 00:05:50.042 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:50.042 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2845725 00:05:50.042 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:50.042 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2845725 00:05:50.042 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:50.042 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:50.042 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:50.042 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:50.042 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:50.042 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:50.042 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:50.042 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2845725 00:05:50.042 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:50.042 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:50.042 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:50.042 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:50.042 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:50.042 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2845725 00:05:50.042 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:50.042 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:50.042 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:50.042 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2845725 00:05:50.042 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:50.042 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2845725 00:05:50.042 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:50.042 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:50.042 00:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:50.042 00:32:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2845725 00:05:50.042 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2845725 ']' 00:05:50.042 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2845725 00:05:50.042 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:50.042 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.042 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2845725 00:05:50.042 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.042 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.042 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2845725' 00:05:50.042 killing process with pid 2845725 00:05:50.042 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2845725 00:05:50.042 00:32:07 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2845725 00:05:50.301 00:05:50.301 real 0m1.051s 00:05:50.301 user 0m1.105s 00:05:50.301 sys 0m0.413s 00:05:50.301 00:32:08 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.301 00:32:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.301 ************************************ 00:05:50.301 END TEST dpdk_mem_utility 00:05:50.301 ************************************ 00:05:50.301 00:32:08 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.301 00:32:08 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:50.301 00:32:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.301 00:32:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.301 00:32:08 -- common/autotest_common.sh@10 -- # set +x 00:05:50.301 ************************************ 00:05:50.301 START TEST event 00:05:50.301 ************************************ 00:05:50.301 00:32:08 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:50.560 * Looking for test storage... 
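
The dpdk_mem_utility pass above is a two-step flow: ask the running target for a DPDK allocator snapshot (env_dpdk_get_mem_stats, which the trace shows landing in /tmp/spdk_mem_dump.txt), then render that dump with scripts/dpdk_mem_info.py, once for the summary and once per heap with -m. The same flow in isolation, assuming spdk_tgt is already listening on the default RPC socket:

# Sketch: dump and summarize DPDK memory stats from a running spdk_tgt.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# 1) The target writes its allocator state to /tmp/spdk_mem_dump.txt.
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

# 2) Post-process the dump: overall view, then heap 0 only.
"$SPDK_DIR/scripts/dpdk_mem_info.py"
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0
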
00:05:50.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:50.560 00:32:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:50.560 00:32:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:50.560 00:32:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:50.560 00:32:08 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:50.560 00:32:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.560 00:32:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.560 ************************************ 00:05:50.560 START TEST event_perf 00:05:50.560 ************************************ 00:05:50.560 00:32:08 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:50.560 Running I/O for 1 seconds...[2024-07-16 00:32:08.267696] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:50.560 [2024-07-16 00:32:08.267769] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846041 ] 00:05:50.560 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.560 [2024-07-16 00:32:08.342840] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.819 [2024-07-16 00:32:08.438711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.819 [2024-07-16 00:32:08.438751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.819 [2024-07-16 00:32:08.438863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.819 [2024-07-16 00:32:08.438864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.756 Running I/O for 1 seconds... 00:05:51.756 lcore 0: 100775 00:05:51.756 lcore 1: 100778 00:05:51.756 lcore 2: 100781 00:05:51.756 lcore 3: 100779 00:05:51.756 done. 00:05:51.756 00:05:51.756 real 0m1.272s 00:05:51.756 user 0m4.172s 00:05:51.756 sys 0m0.089s 00:05:51.756 00:32:09 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.756 00:32:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.756 ************************************ 00:05:51.756 END TEST event_perf 00:05:51.756 ************************************ 00:05:51.756 00:32:09 event -- common/autotest_common.sh@1142 -- # return 0 00:05:51.756 00:32:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:51.756 00:32:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:51.756 00:32:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.756 00:32:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.756 ************************************ 00:05:51.756 START TEST event_reactor 00:05:51.756 ************************************ 00:05:51.756 00:32:09 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:52.015 [2024-07-16 00:32:09.611294] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:05:52.015 [2024-07-16 00:32:09.611353] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846325 ] 00:05:52.015 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.015 [2024-07-16 00:32:09.695948] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.015 [2024-07-16 00:32:09.783879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.031 test_start 00:05:53.031 oneshot 00:05:53.031 tick 100 00:05:53.031 tick 100 00:05:53.031 tick 250 00:05:53.031 tick 100 00:05:53.031 tick 100 00:05:53.031 tick 250 00:05:53.031 tick 100 00:05:53.031 tick 500 00:05:53.031 tick 100 00:05:53.031 tick 100 00:05:53.031 tick 250 00:05:53.031 tick 100 00:05:53.031 tick 100 00:05:53.031 test_end 00:05:53.031 00:05:53.031 real 0m1.275s 00:05:53.031 user 0m1.174s 00:05:53.031 sys 0m0.095s 00:05:53.031 00:32:10 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.031 00:32:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:53.031 ************************************ 00:05:53.031 END TEST event_reactor 00:05:53.031 ************************************ 00:05:53.290 00:32:10 event -- common/autotest_common.sh@1142 -- # return 0 00:05:53.290 00:32:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.290 00:32:10 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:53.290 00:32:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.290 00:32:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.290 ************************************ 00:05:53.290 START TEST event_reactor_perf 00:05:53.290 ************************************ 00:05:53.290 00:32:10 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.290 [2024-07-16 00:32:10.958859] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:05:53.290 [2024-07-16 00:32:10.958938] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846608 ] 00:05:53.290 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.290 [2024-07-16 00:32:11.043236] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.549 [2024-07-16 00:32:11.131554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.485 test_start 00:05:54.485 test_end 00:05:54.485 Performance: 313421 events per second 00:05:54.485 00:05:54.485 real 0m1.275s 00:05:54.485 user 0m1.186s 00:05:54.485 sys 0m0.084s 00:05:54.485 00:32:12 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.485 00:32:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.485 ************************************ 00:05:54.485 END TEST event_reactor_perf 00:05:54.485 ************************************ 00:05:54.485 00:32:12 event -- common/autotest_common.sh@1142 -- # return 0 00:05:54.485 00:32:12 event -- event/event.sh@49 -- # uname -s 00:05:54.485 00:32:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:54.485 00:32:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:54.485 00:32:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.485 00:32:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.485 00:32:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.485 ************************************ 00:05:54.485 START TEST event_scheduler 00:05:54.485 ************************************ 00:05:54.485 00:32:12 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:54.744 * Looking for test storage... 00:05:54.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:54.744 00:32:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:54.744 00:32:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2846927 00:05:54.744 00:32:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.744 00:32:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:54.744 00:32:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2846927 00:05:54.744 00:32:12 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2846927 ']' 00:05:54.744 00:32:12 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.744 00:32:12 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.744 00:32:12 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:54.744 00:32:12 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.744 00:32:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.744 [2024-07-16 00:32:12.427016] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:05:54.744 [2024-07-16 00:32:12.427071] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846927 ] 00:05:54.744 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.744 [2024-07-16 00:32:12.541769] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.003 [2024-07-16 00:32:12.698350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.003 [2024-07-16 00:32:12.698389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.003 [2024-07-16 00:32:12.698513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.003 [2024-07-16 00:32:12.698504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.570 00:32:13 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.570 00:32:13 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:55.570 00:32:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:55.570 00:32:13 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.570 00:32:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.570 [2024-07-16 00:32:13.389851] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:55.570 [2024-07-16 00:32:13.389896] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:55.570 [2024-07-16 00:32:13.389921] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:55.570 [2024-07-16 00:32:13.389937] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:55.570 [2024-07-16 00:32:13.389952] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:55.570 00:32:13 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.570 00:32:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:55.570 00:32:13 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.570 00:32:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.828 [2024-07-16 00:32:13.500764] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
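
The scheduler app above is deliberately started with --wait-for-rpc, switched to the dynamic scheduler over RPC, and only then allowed to finish initialization; the NOTICE lines about load limit 20, core limit 80 and core busy 95 are the dynamic scheduler's defaults being applied at that point. The same ordering as a stand-alone sketch (binary path and flags are the ones in the trace; the RPC socket is the default /var/tmp/spdk.sock):

# Sketch: select the dynamic scheduler before framework initialization completes.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
sched_pid=$!
# ... wait for /var/tmp/spdk.sock as in the sketches above ...

# Order matters: the scheduler must be chosen before framework_start_init.
"$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic
"$SPDK_DIR/scripts/rpc.py" framework_start_init
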
00:05:55.829 00:32:13 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.829 00:32:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:55.829 00:32:13 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.829 00:32:13 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.829 00:32:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.829 ************************************ 00:05:55.829 START TEST scheduler_create_thread 00:05:55.829 ************************************ 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.829 2 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.829 3 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.829 4 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.829 5 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.829 6 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.829 7 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.829 8 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.829 9 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.829 10 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.829 00:32:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.767 00:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.767 00:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:56.767 00:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.767 00:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.145 00:32:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.145 00:32:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.145 00:32:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.145 00:32:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.145 00:32:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.082 00:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.082 00:05:59.082 real 0m3.382s 00:05:59.082 user 0m0.026s 00:05:59.082 sys 0m0.004s 00:05:59.082 00:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.082 00:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.082 ************************************ 00:05:59.082 END TEST scheduler_create_thread 00:05:59.082 ************************************ 00:05:59.341 00:32:16 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:59.341 00:32:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:59.341 00:32:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2846927 00:05:59.341 00:32:16 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2846927 ']' 00:05:59.341 00:32:16 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2846927 00:05:59.341 00:32:16 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:59.341 00:32:16 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.341 00:32:16 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2846927 00:05:59.341 00:32:17 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:59.341 00:32:17 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:59.341 00:32:17 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2846927' 00:05:59.341 killing process with pid 2846927 00:05:59.341 00:32:17 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2846927 00:05:59.341 00:32:17 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2846927 00:05:59.599 [2024-07-16 00:32:17.299615] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
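The scheduler_create_thread test above drives the scheduler test application entirely through rpc.py with the scheduler plugin loaded. A minimal sketch of the same calls, assuming the test app from the log is still listening on its RPC socket and that scheduler_plugin is reachable on rpc.py's plugin path; names, masks, and activity values mirror the ones visible in the log:

# Hedged sketch of the scheduler plugin RPCs exercised by scheduler_create_thread.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Active thread pinned to core 0 (mask 0x1, 100% active).
$RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

# Idle thread pinned to core 1 (mask 0x2, 0% active).
$RPC --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0

# Unpinned thread at ~30% activity; the RPC prints the new thread id, which the
# test captures the same way (thread_id=11 in the log above).
tid=$($RPC --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30)

# Raise its activity to 50%, then delete it, as the test does before shutdown.
$RPC --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
$RPC --plugin scheduler_plugin scheduler_thread_delete "$tid"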
00:05:59.858 00:05:59.858 real 0m5.348s 00:05:59.858 user 0m10.863s 00:05:59.858 sys 0m0.465s 00:05:59.858 00:32:17 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.858 00:32:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.858 ************************************ 00:05:59.858 END TEST event_scheduler 00:05:59.858 ************************************ 00:05:59.858 00:32:17 event -- common/autotest_common.sh@1142 -- # return 0 00:05:59.858 00:32:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.858 00:32:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.858 00:32:17 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.858 00:32:17 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.858 00:32:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.117 ************************************ 00:06:00.117 START TEST app_repeat 00:06:00.117 ************************************ 00:06:00.117 00:32:17 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2847784 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2847784' 00:06:00.117 Process app_repeat pid: 2847784 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:00.117 spdk_app_start Round 0 00:06:00.117 00:32:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2847784 /var/tmp/spdk-nbd.sock 00:06:00.117 00:32:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2847784 ']' 00:06:00.117 00:32:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.117 00:32:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.117 00:32:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.117 00:32:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.117 00:32:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.117 [2024-07-16 00:32:17.751573] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:00.117 [2024-07-16 00:32:17.751630] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2847784 ] 00:06:00.117 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.117 [2024-07-16 00:32:17.834896] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.117 [2024-07-16 00:32:17.925005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.117 [2024-07-16 00:32:17.925010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.376 00:32:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.376 00:32:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:00.376 00:32:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.635 Malloc0 00:06:00.635 00:32:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.894 Malloc1 00:06:00.894 00:32:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.894 00:32:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.152 /dev/nbd0 00:06:01.152 00:32:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.152 00:32:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.152 00:32:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:01.152 00:32:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:01.152 00:32:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.152 00:32:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.152 00:32:18 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:01.152 00:32:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:01.152 00:32:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.152 00:32:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.152 00:32:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.152 1+0 records in 00:06:01.152 1+0 records out 00:06:01.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189892 s, 21.6 MB/s 00:06:01.152 00:32:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.152 00:32:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:01.152 00:32:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.152 00:32:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.152 00:32:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:01.152 00:32:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.152 00:32:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.152 00:32:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.410 /dev/nbd1 00:06:01.410 00:32:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.410 00:32:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.410 1+0 records in 00:06:01.410 1+0 records out 00:06:01.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231774 s, 17.7 MB/s 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.410 00:32:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:01.410 00:32:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.410 00:32:19 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.410 00:32:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.410 00:32:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.410 00:32:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.668 { 00:06:01.668 "nbd_device": "/dev/nbd0", 00:06:01.668 "bdev_name": "Malloc0" 00:06:01.668 }, 00:06:01.668 { 00:06:01.668 "nbd_device": "/dev/nbd1", 00:06:01.668 "bdev_name": "Malloc1" 00:06:01.668 } 00:06:01.668 ]' 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.668 { 00:06:01.668 "nbd_device": "/dev/nbd0", 00:06:01.668 "bdev_name": "Malloc0" 00:06:01.668 }, 00:06:01.668 { 00:06:01.668 "nbd_device": "/dev/nbd1", 00:06:01.668 "bdev_name": "Malloc1" 00:06:01.668 } 00:06:01.668 ]' 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.668 /dev/nbd1' 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.668 /dev/nbd1' 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.668 00:32:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.668 256+0 records in 00:06:01.668 256+0 records out 00:06:01.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103047 s, 102 MB/s 00:06:01.669 00:32:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.669 00:32:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.669 256+0 records in 00:06:01.669 256+0 records out 00:06:01.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200465 s, 52.3 MB/s 00:06:01.669 00:32:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.669 00:32:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.927 256+0 records in 00:06:01.927 256+0 records out 00:06:01.927 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0211245 s, 49.6 MB/s 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.927 00:32:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.185 00:32:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.185 00:32:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.185 00:32:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.185 00:32:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.185 00:32:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.185 00:32:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.185 00:32:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.185 00:32:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.185 00:32:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.185 00:32:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.442 00:32:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.442 00:32:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.442 00:32:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.442 00:32:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.442 00:32:20 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.442 00:32:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.442 00:32:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.442 00:32:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.442 00:32:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.442 00:32:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.442 00:32:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.700 00:32:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.700 00:32:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.700 00:32:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.700 00:32:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.700 00:32:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.700 00:32:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.700 00:32:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.700 00:32:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.700 00:32:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.700 00:32:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.700 00:32:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.700 00:32:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.700 00:32:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.960 00:32:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.960 [2024-07-16 00:32:20.771556] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.218 [2024-07-16 00:32:20.853587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.218 [2024-07-16 00:32:20.853592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.218 [2024-07-16 00:32:20.899184] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.218 [2024-07-16 00:32:20.899232] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.746 00:32:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.747 00:32:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:05.747 spdk_app_start Round 1 00:06:05.747 00:32:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2847784 /var/tmp/spdk-nbd.sock 00:06:05.747 00:32:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2847784 ']' 00:06:05.747 00:32:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.747 00:32:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.747 00:32:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
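Round 0 above follows the nbd_common.sh pattern: create two malloc bdevs over the app_repeat instance's RPC socket, export them as /dev/nbd0 and /dev/nbd1, write the same random file to both devices, and read it back with cmp. A condensed sketch of that flow, assuming the nbd kernel module is loaded and the app is listening on /var/tmp/spdk-nbd.sock as in the log (the scratch path here is a stand-in for the repo's test/event/nbdrandtest file):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
TMP=/tmp/nbdrandtest   # hypothetical scratch file for the random pattern

# Two 64 MB malloc bdevs with 4096-byte blocks; each call prints the bdev name.
$RPC bdev_malloc_create 64 4096    # -> Malloc0
$RPC bdev_malloc_create 64 4096    # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

# Write 1 MiB of random data to both devices, then verify it reads back intact.
dd if=/dev/urandom of=$TMP bs=4096 count=256
for d in /dev/nbd0 /dev/nbd1; do
    dd if=$TMP of=$d bs=4096 count=256 oflag=direct
done
for d in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M $TMP $d
done
rm $TMP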
00:06:05.747 00:32:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.747 00:32:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.006 00:32:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.006 00:32:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:06.006 00:32:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.265 Malloc0 00:06:06.265 00:32:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.525 Malloc1 00:06:06.525 00:32:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.525 /dev/nbd0 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:06.525 1+0 records in 00:06:06.525 1+0 records out 00:06:06.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027751 s, 14.8 MB/s 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:06.525 00:32:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.525 00:32:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.785 /dev/nbd1 00:06:06.785 00:32:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.785 00:32:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.785 1+0 records in 00:06:06.785 1+0 records out 00:06:06.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226719 s, 18.1 MB/s 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:06.785 00:32:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:06.785 00:32:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.785 00:32:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.785 00:32:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.785 00:32:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.785 00:32:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.044 00:32:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:07.044 { 00:06:07.044 "nbd_device": "/dev/nbd0", 00:06:07.044 "bdev_name": "Malloc0" 00:06:07.044 }, 00:06:07.044 { 00:06:07.044 "nbd_device": "/dev/nbd1", 00:06:07.044 "bdev_name": "Malloc1" 00:06:07.044 } 00:06:07.044 ]' 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.304 { 00:06:07.304 "nbd_device": "/dev/nbd0", 00:06:07.304 "bdev_name": "Malloc0" 00:06:07.304 }, 00:06:07.304 { 00:06:07.304 "nbd_device": "/dev/nbd1", 00:06:07.304 "bdev_name": "Malloc1" 00:06:07.304 } 00:06:07.304 ]' 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.304 /dev/nbd1' 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.304 /dev/nbd1' 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.304 256+0 records in 00:06:07.304 256+0 records out 00:06:07.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101375 s, 103 MB/s 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.304 256+0 records in 00:06:07.304 256+0 records out 00:06:07.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202244 s, 51.8 MB/s 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.304 256+0 records in 00:06:07.304 256+0 records out 00:06:07.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214968 s, 48.8 MB/s 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.304 00:32:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.304 00:32:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.567 00:32:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.567 00:32:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.567 00:32:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.567 00:32:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.567 00:32:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.567 00:32:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.567 00:32:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.567 00:32:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.567 00:32:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.567 00:32:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.825 00:32:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.825 00:32:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.825 00:32:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.825 00:32:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.825 00:32:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.825 00:32:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.825 00:32:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.825 00:32:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.825 00:32:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.825 00:32:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.825 00:32:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.084 00:32:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.084 00:32:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.084 00:32:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.084 00:32:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.084 00:32:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.084 00:32:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.084 00:32:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.084 00:32:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.084 00:32:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.084 00:32:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.084 00:32:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.084 00:32:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.084 00:32:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.343 00:32:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.601 [2024-07-16 00:32:26.348475] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.601 [2024-07-16 00:32:26.429656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.601 [2024-07-16 00:32:26.429660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.860 [2024-07-16 00:32:26.476338] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.860 [2024-07-16 00:32:26.476387] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.395 00:32:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.395 00:32:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:11.395 spdk_app_start Round 2 00:06:11.395 00:32:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2847784 /var/tmp/spdk-nbd.sock 00:06:11.395 00:32:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2847784 ']' 00:06:11.395 00:32:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.395 00:32:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.395 00:32:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
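The count checks in each round (count=2 while both disks are exported, count=0 after nbd_stop_disk) come from parsing the nbd_get_disks JSON with jq and counting device paths with grep. A small sketch of that check against the same socket, with the || true guard assumed because grep -c exits non-zero when nothing matches:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

# nbd_get_disks returns a JSON array of {nbd_device, bdev_name} objects.
disks_json=$($RPC nbd_get_disks)

# Pull out the device paths and count the /dev/nbd* entries.
disks=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$disks" | grep -c /dev/nbd || true)

# The test expects 2 after nbd_start_disk and 0 after nbd_stop_disk.
echo "exported nbd devices: $count"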
00:06:11.395 00:32:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.395 00:32:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.654 00:32:29 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.654 00:32:29 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:11.654 00:32:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.913 Malloc0 00:06:11.913 00:32:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.172 Malloc1 00:06:12.172 00:32:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.172 00:32:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.430 /dev/nbd0 00:06:12.430 00:32:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.430 00:32:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:12.430 1+0 records in 00:06:12.430 1+0 records out 00:06:12.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192362 s, 21.3 MB/s 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:12.430 00:32:30 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:12.430 00:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.430 00:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.430 00:32:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.689 /dev/nbd1 00:06:12.689 00:32:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.689 00:32:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.689 1+0 records in 00:06:12.689 1+0 records out 00:06:12.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177471 s, 23.1 MB/s 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:12.689 00:32:30 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:12.689 00:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.689 00:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.689 00:32:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.689 00:32:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.689 00:32:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:12.948 { 00:06:12.948 "nbd_device": "/dev/nbd0", 00:06:12.948 "bdev_name": "Malloc0" 00:06:12.948 }, 00:06:12.948 { 00:06:12.948 "nbd_device": "/dev/nbd1", 00:06:12.948 "bdev_name": "Malloc1" 00:06:12.948 } 00:06:12.948 ]' 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.948 { 00:06:12.948 "nbd_device": "/dev/nbd0", 00:06:12.948 "bdev_name": "Malloc0" 00:06:12.948 }, 00:06:12.948 { 00:06:12.948 "nbd_device": "/dev/nbd1", 00:06:12.948 "bdev_name": "Malloc1" 00:06:12.948 } 00:06:12.948 ]' 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.948 /dev/nbd1' 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.948 /dev/nbd1' 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.948 256+0 records in 00:06:12.948 256+0 records out 00:06:12.948 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103477 s, 101 MB/s 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.948 256+0 records in 00:06:12.948 256+0 records out 00:06:12.948 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196914 s, 53.3 MB/s 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.948 256+0 records in 00:06:12.948 256+0 records out 00:06:12.948 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212919 s, 49.2 MB/s 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.948 00:32:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.207 00:32:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.207 00:32:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.207 00:32:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.207 00:32:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.207 00:32:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.207 00:32:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.207 00:32:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.466 00:32:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.466 00:32:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.466 00:32:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.466 00:32:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.466 00:32:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.466 00:32:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.466 00:32:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.466 00:32:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.466 00:32:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.466 00:32:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.724 00:32:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.724 00:32:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.724 00:32:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.724 00:32:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.724 00:32:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.724 00:32:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.724 00:32:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.724 00:32:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.724 00:32:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.724 00:32:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.724 00:32:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.983 00:32:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.983 00:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.983 00:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.983 00:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.983 00:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.983 00:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.983 00:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.983 00:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.983 00:32:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.983 00:32:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.983 00:32:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.983 00:32:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.983 00:32:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.241 00:32:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.500 [2024-07-16 00:32:32.108484] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.500 [2024-07-16 00:32:32.190108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.500 [2024-07-16 00:32:32.190113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.500 [2024-07-16 00:32:32.236814] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.500 [2024-07-16 00:32:32.236861] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.784 00:32:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2847784 /var/tmp/spdk-nbd.sock 00:06:17.784 00:32:34 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2847784 ']' 00:06:17.784 00:32:34 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.784 00:32:34 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.784 00:32:34 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
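The waitfornbd and waitfornbd_exit helpers that recur above poll /proc/partitions instead of sleeping blindly: one retries until the nbd device appears and is readable, the other until it disappears after nbd_stop_disk. A simplified sketch of that polling pattern; the 20-attempt limit matches the log, while the sleep interval is assumed, and the real helper copies a block to a scratch file and checks its size rather than reading into /dev/null:

# Simplified sketch of the waitfornbd / waitfornbd_exit polling seen above.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Confirm the device actually answers a direct read before declaring it ready.
    dd if=/dev/$nbd_name of=/dev/null bs=4096 count=1 iflag=direct
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}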
00:06:17.784 00:32:34 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.784 00:32:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:17.784 00:32:35 event.app_repeat -- event/event.sh@39 -- # killprocess 2847784 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2847784 ']' 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2847784 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2847784 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2847784' 00:06:17.784 killing process with pid 2847784 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2847784 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2847784 00:06:17.784 spdk_app_start is called in Round 0. 00:06:17.784 Shutdown signal received, stop current app iteration 00:06:17.784 Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 reinitialization... 00:06:17.784 spdk_app_start is called in Round 1. 00:06:17.784 Shutdown signal received, stop current app iteration 00:06:17.784 Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 reinitialization... 00:06:17.784 spdk_app_start is called in Round 2. 00:06:17.784 Shutdown signal received, stop current app iteration 00:06:17.784 Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 reinitialization... 00:06:17.784 spdk_app_start is called in Round 3. 
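killprocess, traced just above, is an autotest helper; its behaviour can be restated in a few lines (a paraphrase of the trace, not the helper's source, and it assumes the pid is a child of the calling shell so wait can reap it):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0 for an SPDK app
        [ "$name" != sudo ]                             # never SIGTERM a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                             # reap the child; ignore its exit code
    }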
00:06:17.784 Shutdown signal received, stop current app iteration 00:06:17.784 00:32:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:17.784 00:32:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:17.784 00:06:17.784 real 0m17.691s 00:06:17.784 user 0m39.212s 00:06:17.784 sys 0m2.854s 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.784 00:32:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.784 ************************************ 00:06:17.784 END TEST app_repeat 00:06:17.784 ************************************ 00:06:17.784 00:32:35 event -- common/autotest_common.sh@1142 -- # return 0 00:06:17.784 00:32:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:17.784 00:32:35 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.784 00:32:35 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.784 00:32:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.784 00:32:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.784 ************************************ 00:06:17.784 START TEST cpu_locks 00:06:17.784 ************************************ 00:06:17.784 00:32:35 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.784 * Looking for test storage... 00:06:17.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:17.784 00:32:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:17.784 00:32:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:17.784 00:32:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:17.784 00:32:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:17.784 00:32:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.784 00:32:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.784 00:32:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.784 ************************************ 00:06:17.784 START TEST default_locks 00:06:17.784 ************************************ 00:06:17.784 00:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:17.784 00:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2851394 00:06:17.784 00:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2851394 00:06:17.784 00:32:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.784 00:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2851394 ']' 00:06:17.784 00:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.785 00:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.785 00:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
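Every cpu_locks sub-test that follows starts its own target and waits for the RPC socket before doing anything else. Roughly, with paths shortened and the readiness loop simplified from the waitforlisten helper (probing with rpc_get_methods is an assumption; the real helper watches the Unix socket itself):

    rpc_sock=/var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x1 &          # -m 0x1 pins the app to core 0
    spdk_tgt_pid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
    for i in $(seq 1 100); do
        ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && break
        kill -0 "$spdk_tgt_pid"            # abort the wait if the target died during startup
        sleep 0.1
    done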
00:06:17.785 00:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.785 00:32:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.043 [2024-07-16 00:32:35.645095] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:18.043 [2024-07-16 00:32:35.645146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851394 ] 00:06:18.043 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.043 [2024-07-16 00:32:35.727718] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.043 [2024-07-16 00:32:35.817904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.301 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.301 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:18.301 00:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2851394 00:06:18.301 00:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2851394 00:06:18.301 00:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.867 lslocks: write error 00:06:18.867 00:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2851394 00:06:18.867 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2851394 ']' 00:06:18.867 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2851394 00:06:18.867 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:18.867 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.867 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2851394 00:06:18.867 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.867 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.867 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2851394' 00:06:18.867 killing process with pid 2851394 00:06:18.867 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2851394 00:06:18.867 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2851394 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2851394 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2851394 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2851394 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2851394 ']' 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2851394) - No such process 00:06:19.124 ERROR: process (pid: 2851394) is no longer running 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.124 00:06:19.124 real 0m1.337s 00:06:19.124 user 0m1.338s 00:06:19.124 sys 0m0.608s 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.124 00:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.124 ************************************ 00:06:19.124 END TEST default_locks 00:06:19.124 ************************************ 00:06:19.382 00:32:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:19.382 00:32:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:19.382 00:32:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.382 00:32:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.382 00:32:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.382 ************************************ 00:06:19.382 START TEST default_locks_via_rpc 00:06:19.382 ************************************ 00:06:19.382 00:32:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:19.382 00:32:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2851692 00:06:19.382 00:32:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.382 00:32:37 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2851692 00:06:19.382 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2851692 ']' 00:06:19.382 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.382 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.382 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.382 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.382 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.382 [2024-07-16 00:32:37.050354] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:19.382 [2024-07-16 00:32:37.050405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851692 ] 00:06:19.382 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.382 [2024-07-16 00:32:37.134207] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.639 [2024-07-16 00:32:37.225175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.639 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2851692 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2851692 00:06:19.640 00:32:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.206 
00:32:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2851692 00:06:20.206 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2851692 ']' 00:06:20.206 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2851692 00:06:20.206 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:20.206 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.206 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2851692 00:06:20.206 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.206 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.206 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2851692' 00:06:20.206 killing process with pid 2851692 00:06:20.206 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2851692 00:06:20.206 00:32:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2851692 00:06:20.465 00:06:20.465 real 0m1.166s 00:06:20.465 user 0m1.163s 00:06:20.465 sys 0m0.495s 00:06:20.465 00:32:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.465 00:32:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.465 ************************************ 00:06:20.465 END TEST default_locks_via_rpc 00:06:20.465 ************************************ 00:06:20.465 00:32:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:20.465 00:32:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.465 00:32:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.465 00:32:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.465 00:32:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.465 ************************************ 00:06:20.465 START TEST non_locking_app_on_locked_coremask 00:06:20.465 ************************************ 00:06:20.465 00:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:20.465 00:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2851924 00:06:20.465 00:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2851924 /var/tmp/spdk.sock 00:06:20.465 00:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2851924 ']' 00:06:20.465 00:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.465 00:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.465 00:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.465 00:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.465 00:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.465 00:32:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.465 [2024-07-16 00:32:38.291876] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:20.465 [2024-07-16 00:32:38.291929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851924 ] 00:06:20.724 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.724 [2024-07-16 00:32:38.372902] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.724 [2024-07-16 00:32:38.461973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.659 00:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.659 00:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:21.659 00:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2852067 00:06:21.659 00:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2852067 /var/tmp/spdk2.sock 00:06:21.659 00:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2852067 ']' 00:06:21.659 00:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.659 00:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:21.660 00:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.660 00:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.660 00:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.660 00:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.931 [2024-07-16 00:32:39.534910] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:21.931 [2024-07-16 00:32:39.534974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852067 ] 00:06:21.931 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.931 [2024-07-16 00:32:39.647405] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
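Both tests above verify the lock the same way: an SPDK app started with a core mask flocks one /var/tmp/spdk_cpu_lock_NNN file per claimed core, and lslocks lists those locks against the pid (the stray "lslocks: write error" lines are only lslocks complaining about the pipe closed early by grep -q). A condensed sketch of the checks, assuming spdk_tgt_pid and scripts/rpc.py as above:

    locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

    locks_exist "$spdk_tgt_pid"                       # taken by default at startup

    # the locks can also be dropped and re-taken at runtime over JSON-RPC
    ./scripts/rpc.py framework_disable_cpumask_locks
    ! locks_exist "$spdk_tgt_pid"                     # no spdk_cpu_lock entries remain
    ./scripts/rpc.py framework_enable_cpumask_locks
    locks_exist "$spdk_tgt_pid"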
00:06:21.931 [2024-07-16 00:32:39.647438] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.247 [2024-07-16 00:32:39.827101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.859 00:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.860 00:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:22.860 00:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2851924 00:06:22.860 00:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2851924 00:06:22.860 00:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.427 lslocks: write error 00:06:23.427 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2851924 00:06:23.427 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2851924 ']' 00:06:23.427 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2851924 00:06:23.427 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:23.427 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.427 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2851924 00:06:23.427 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.427 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.427 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2851924' 00:06:23.427 killing process with pid 2851924 00:06:23.427 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2851924 00:06:23.427 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2851924 00:06:23.994 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2852067 00:06:23.994 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2852067 ']' 00:06:23.994 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2852067 00:06:24.252 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:24.252 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.252 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2852067 00:06:24.252 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.252 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.252 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2852067' 00:06:24.252 
killing process with pid 2852067 00:06:24.252 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2852067 00:06:24.252 00:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2852067 00:06:24.511 00:06:24.511 real 0m3.978s 00:06:24.511 user 0m4.671s 00:06:24.511 sys 0m1.133s 00:06:24.511 00:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.511 00:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.511 ************************************ 00:06:24.511 END TEST non_locking_app_on_locked_coremask 00:06:24.511 ************************************ 00:06:24.511 00:32:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:24.511 00:32:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:24.511 00:32:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.511 00:32:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.511 00:32:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.511 ************************************ 00:06:24.511 START TEST locking_app_on_unlocked_coremask 00:06:24.511 ************************************ 00:06:24.511 00:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:24.511 00:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2852581 00:06:24.511 00:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2852581 /var/tmp/spdk.sock 00:06:24.511 00:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:24.511 00:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2852581 ']' 00:06:24.511 00:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.511 00:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.511 00:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.511 00:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.511 00:32:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.511 [2024-07-16 00:32:42.339609] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:24.511 [2024-07-16 00:32:42.339660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852581 ] 00:06:24.770 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.770 [2024-07-16 00:32:42.422680] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:24.770 [2024-07-16 00:32:42.422711] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.770 [2024-07-16 00:32:42.512688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.706 00:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.706 00:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:25.706 00:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2852822 00:06:25.706 00:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2852822 /var/tmp/spdk2.sock 00:06:25.706 00:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:25.706 00:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2852822 ']' 00:06:25.706 00:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.706 00:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.706 00:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.706 00:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.706 00:32:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.964 [2024-07-16 00:32:43.583916] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
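The "CPU core locks deactivated" notice above is the effect of --disable-cpumask-locks: the primary skips flocking /var/tmp/spdk_cpu_lock_000, so a second instance started on the same core can claim it. Schematically (the waitforlisten calls between the launches are omitted):

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &        # primary: core 0, no lock taken
    pid1=$!

    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # secondary: same core, separate RPC socket
    pid2=$!                                                      # succeeds because core 0 is unclaimed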
00:06:25.964 [2024-07-16 00:32:43.583978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852822 ] 00:06:25.964 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.964 [2024-07-16 00:32:43.696082] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.222 [2024-07-16 00:32:43.870583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.158 00:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.158 00:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:27.158 00:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2852822 00:06:27.158 00:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2852822 00:06:27.158 00:32:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.726 lslocks: write error 00:06:27.726 00:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2852581 00:06:27.726 00:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2852581 ']' 00:06:27.726 00:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2852581 00:06:27.726 00:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:27.726 00:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.726 00:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2852581 00:06:27.726 00:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.726 00:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.726 00:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2852581' 00:06:27.726 killing process with pid 2852581 00:06:27.726 00:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2852581 00:06:27.726 00:32:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2852581 00:06:28.293 00:32:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2852822 00:06:28.293 00:32:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2852822 ']' 00:06:28.293 00:32:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2852822 00:06:28.293 00:32:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:28.293 00:32:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.293 00:32:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2852822 00:06:28.552 00:32:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:28.552 00:32:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.552 00:32:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2852822' 00:06:28.552 killing process with pid 2852822 00:06:28.552 00:32:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2852822 00:06:28.552 00:32:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2852822 00:06:28.811 00:06:28.811 real 0m4.189s 00:06:28.811 user 0m5.086s 00:06:28.811 sys 0m1.122s 00:06:28.811 00:32:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.811 00:32:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.811 ************************************ 00:06:28.811 END TEST locking_app_on_unlocked_coremask 00:06:28.811 ************************************ 00:06:28.811 00:32:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:28.811 00:32:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:28.811 00:32:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.811 00:32:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.811 00:32:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.811 ************************************ 00:06:28.811 START TEST locking_app_on_locked_coremask 00:06:28.811 ************************************ 00:06:28.811 00:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:28.811 00:32:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2853388 00:06:28.811 00:32:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2853388 /var/tmp/spdk.sock 00:06:28.811 00:32:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.811 00:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2853388 ']' 00:06:28.811 00:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.811 00:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.811 00:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.811 00:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.811 00:32:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.811 [2024-07-16 00:32:46.596037] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:28.811 [2024-07-16 00:32:46.596091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853388 ] 00:06:28.811 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.070 [2024-07-16 00:32:46.679151] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.070 [2024-07-16 00:32:46.770150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.006 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.006 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:30.006 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2853647 00:06:30.006 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2853647 /var/tmp/spdk2.sock 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2853647 /var/tmp/spdk2.sock 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2853647 /var/tmp/spdk2.sock 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2853647 ']' 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.007 00:32:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.265 [2024-07-16 00:32:47.875414] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:30.265 [2024-07-16 00:32:47.875528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853647 ] 00:06:30.265 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.265 [2024-07-16 00:32:48.022080] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2853388 has claimed it. 00:06:30.265 [2024-07-16 00:32:48.022126] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2853647) - No such process 00:06:30.832 ERROR: process (pid: 2853647) is no longer running 00:06:30.832 00:32:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.832 00:32:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:30.832 00:32:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:30.832 00:32:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.832 00:32:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.833 00:32:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.833 00:32:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2853388 00:06:30.833 00:32:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2853388 00:06:30.833 00:32:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.400 lslocks: write error 00:06:31.400 00:32:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2853388 00:06:31.400 00:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2853388 ']' 00:06:31.400 00:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2853388 00:06:31.400 00:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:31.400 00:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.400 00:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2853388 00:06:31.400 00:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.400 00:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.400 00:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2853388' 00:06:31.400 killing process with pid 2853388 00:06:31.400 00:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2853388 00:06:31.400 00:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2853388 00:06:31.659 00:06:31.659 real 0m2.888s 00:06:31.659 user 0m3.569s 00:06:31.659 sys 0m0.810s 00:06:31.659 00:32:49 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.659 00:32:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.659 ************************************ 00:06:31.659 END TEST locking_app_on_locked_coremask 00:06:31.659 ************************************ 00:06:31.659 00:32:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:31.659 00:32:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:31.659 00:32:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.659 00:32:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.659 00:32:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.659 ************************************ 00:06:31.659 START TEST locking_overlapped_coremask 00:06:31.659 ************************************ 00:06:31.659 00:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:31.659 00:32:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2853964 00:06:31.659 00:32:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2853964 /var/tmp/spdk.sock 00:06:31.659 00:32:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:31.659 00:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2853964 ']' 00:06:31.659 00:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.659 00:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.659 00:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.659 00:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.659 00:32:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.918 [2024-07-16 00:32:49.549446] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
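The failing path exercised above, and again by the overlapped-mask test starting here, follows one pattern: the second target refuses to start when any core in its mask is already locked, and the test asserts that with the NOT helper. This is a condensed restatement of the trace, not the cpu_locks.sh source:

    ./build/bin/spdk_tgt -m 0x7 &                                # primary claims cores 0, 1 and 2
    pid1=$!
    # ... waitforlisten "$pid1" ...

    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock &        # asks for cores 2, 3 and 4
    pid2=$!
    # core 2 is already locked, so the second target logs
    #   "Cannot create lock on core 2, probably process <pid1> has claimed it."
    # and exits; waiting for it is therefore expected to fail:
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock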
00:06:31.918 [2024-07-16 00:32:49.549500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853964 ] 00:06:31.918 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.918 [2024-07-16 00:32:49.632325] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.918 [2024-07-16 00:32:49.724389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.918 [2024-07-16 00:32:49.724504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.918 [2024-07-16 00:32:49.724505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2854206 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2854206 /var/tmp/spdk2.sock 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2854206 /var/tmp/spdk2.sock 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2854206 /var/tmp/spdk2.sock 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2854206 ']' 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.855 00:32:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.855 [2024-07-16 00:32:50.477911] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:32.855 [2024-07-16 00:32:50.477971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854206 ] 00:06:32.855 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.855 [2024-07-16 00:32:50.678265] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2853964 has claimed it. 00:06:32.855 [2024-07-16 00:32:50.678353] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:33.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2854206) - No such process 00:06:33.423 ERROR: process (pid: 2854206) is no longer running 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2853964 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2853964 ']' 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2853964 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2853964 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2853964' 00:06:33.423 killing process with pid 2853964 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2853964 00:06:33.423 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2853964 00:06:33.991 00:06:33.991 real 0m2.070s 00:06:33.991 user 0m5.825s 00:06:33.991 sys 0m0.496s 00:06:33.991 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.991 00:32:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.991 ************************************ 00:06:33.991 END TEST locking_overlapped_coremask 00:06:33.991 ************************************ 00:06:33.991 00:32:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:33.991 00:32:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:33.991 00:32:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.991 00:32:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.991 00:32:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.991 ************************************ 00:06:33.991 START TEST locking_overlapped_coremask_via_rpc 00:06:33.991 ************************************ 00:06:33.991 00:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:33.991 00:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2854496 00:06:33.991 00:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2854496 /var/tmp/spdk.sock 00:06:33.991 00:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:33.991 00:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2854496 ']' 00:06:33.991 00:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.991 00:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.991 00:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.991 00:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.991 00:32:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.991 [2024-07-16 00:32:51.690804] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:33.991 [2024-07-16 00:32:51.690862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854496 ] 00:06:33.991 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.991 [2024-07-16 00:32:51.775144] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
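After the failed claim, the overlapped-coremask trace above also confirms that only the primary's lock files survive; for a -m 0x7 primary the check_remaining_locks step reduces to a glob comparison:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})           # one file per core in mask 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]]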
00:06:33.991 [2024-07-16 00:32:51.775177] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.249 [2024-07-16 00:32:51.859645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.249 [2024-07-16 00:32:51.859758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.249 [2024-07-16 00:32:51.859759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.817 00:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.817 00:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:34.817 00:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2854640 00:06:34.817 00:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2854640 /var/tmp/spdk2.sock 00:06:34.817 00:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:34.817 00:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2854640 ']' 00:06:34.817 00:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.817 00:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.817 00:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.817 00:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.817 00:32:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.817 [2024-07-16 00:32:52.610584] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:34.817 [2024-07-16 00:32:52.610648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854640 ] 00:06:34.817 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.076 [2024-07-16 00:32:52.808424] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:35.076 [2024-07-16 00:32:52.808486] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.335 [2024-07-16 00:32:53.105613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.335 [2024-07-16 00:32:53.109312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:35.335 [2024-07-16 00:32:53.109316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.901 [2024-07-16 00:32:53.637441] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2854496 has claimed it. 
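The failure above is the expected outcome of the overlapping core masks: the first target was started with -m 0x7 (binary 00111, cores 0, 1 and 2) and the second with -m 0x1c (binary 11100, cores 2, 3 and 4), so the two masks share exactly one core, 0x7 & 0x1c = 0x04, which is core 2. Once locking has been enabled on the first target, the second target's attempt to claim core 2 has to fail with the message shown.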
00:06:35.901 request: 00:06:35.901 { 00:06:35.901 "method": "framework_enable_cpumask_locks", 00:06:35.901 "req_id": 1 00:06:35.901 } 00:06:35.901 Got JSON-RPC error response 00:06:35.901 response: 00:06:35.901 { 00:06:35.901 "code": -32603, 00:06:35.901 "message": "Failed to claim CPU core: 2" 00:06:35.901 } 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2854496 /var/tmp/spdk.sock 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2854496 ']' 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.901 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.160 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.160 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:36.160 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2854640 /var/tmp/spdk2.sock 00:06:36.160 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2854640 ']' 00:06:36.160 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.160 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.160 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
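The error comes back as an ordinary JSON-RPC response (code -32603, internal error) rather than killing the target, which is what the NOT wrapper around rpc_cmd asserts here. Outside the harness the same call can be reproduced against the second instance's socket; a sketch, assuming the standard scripts/rpc.py client from the SPDK tree:

    # ask the target listening on /var/tmp/spdk2.sock to start enforcing core locks;
    # expected to fail while pid 2854496 still holds the lock for core 2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks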
00:06:36.160 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.160 00:32:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.418 00:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.418 00:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:36.418 00:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:36.418 00:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:36.418 00:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:36.418 00:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:36.418 00:06:36.418 real 0m2.526s 00:06:36.418 user 0m1.188s 00:06:36.418 sys 0m0.200s 00:06:36.418 00:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.418 00:32:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.418 ************************************ 00:06:36.418 END TEST locking_overlapped_coremask_via_rpc 00:06:36.418 ************************************ 00:06:36.418 00:32:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:36.418 00:32:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:36.418 00:32:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2854496 ]] 00:06:36.418 00:32:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2854496 00:06:36.418 00:32:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2854496 ']' 00:06:36.418 00:32:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2854496 00:06:36.418 00:32:54 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:36.418 00:32:54 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.418 00:32:54 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2854496 00:06:36.418 00:32:54 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.418 00:32:54 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.418 00:32:54 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2854496' 00:06:36.418 killing process with pid 2854496 00:06:36.418 00:32:54 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2854496 00:06:36.418 00:32:54 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2854496 00:06:36.986 00:32:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2854640 ]] 00:06:36.986 00:32:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2854640 00:06:36.987 00:32:54 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2854640 ']' 00:06:36.987 00:32:54 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2854640 00:06:36.987 00:32:54 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:36.987 00:32:54 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.987 00:32:54 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2854640 00:06:36.987 00:32:54 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:36.987 00:32:54 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:36.987 00:32:54 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2854640' 00:06:36.987 killing process with pid 2854640 00:06:36.987 00:32:54 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2854640 00:06:36.987 00:32:54 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2854640 00:06:37.555 00:32:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:37.555 00:32:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:37.555 00:32:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2854496 ]] 00:06:37.555 00:32:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2854496 00:06:37.555 00:32:55 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2854496 ']' 00:06:37.555 00:32:55 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2854496 00:06:37.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2854496) - No such process 00:06:37.555 00:32:55 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2854496 is not found' 00:06:37.555 Process with pid 2854496 is not found 00:06:37.555 00:32:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2854640 ]] 00:06:37.555 00:32:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2854640 00:06:37.555 00:32:55 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2854640 ']' 00:06:37.555 00:32:55 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2854640 00:06:37.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2854640) - No such process 00:06:37.555 00:32:55 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2854640 is not found' 00:06:37.555 Process with pid 2854640 is not found 00:06:37.555 00:32:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:37.556 00:06:37.556 real 0m19.659s 00:06:37.556 user 0m35.077s 00:06:37.556 sys 0m5.904s 00:06:37.556 00:32:55 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.556 00:32:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.556 ************************************ 00:06:37.556 END TEST cpu_locks 00:06:37.556 ************************************ 00:06:37.556 00:32:55 event -- common/autotest_common.sh@1142 -- # return 0 00:06:37.556 00:06:37.556 real 0m47.055s 00:06:37.556 user 1m31.883s 00:06:37.556 sys 0m9.863s 00:06:37.556 00:32:55 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.556 00:32:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.556 ************************************ 00:06:37.556 END TEST event 00:06:37.556 ************************************ 00:06:37.556 00:32:55 -- common/autotest_common.sh@1142 -- # return 0 00:06:37.556 00:32:55 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:37.556 00:32:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.556 00:32:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.556 
00:32:55 -- common/autotest_common.sh@10 -- # set +x 00:06:37.556 ************************************ 00:06:37.556 START TEST thread 00:06:37.556 ************************************ 00:06:37.556 00:32:55 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:37.556 * Looking for test storage... 00:06:37.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:37.556 00:32:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:37.556 00:32:55 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:37.556 00:32:55 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.556 00:32:55 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.556 ************************************ 00:06:37.556 START TEST thread_poller_perf 00:06:37.556 ************************************ 00:06:37.556 00:32:55 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:37.815 [2024-07-16 00:32:55.397611] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:37.815 [2024-07-16 00:32:55.397677] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855214 ] 00:06:37.815 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.815 [2024-07-16 00:32:55.482405] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.815 [2024-07-16 00:32:55.581059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.815 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:39.193 ====================================== 00:06:39.193 busy:2211745100 (cyc) 00:06:39.193 total_run_count: 256000 00:06:39.193 tsc_hz: 2200000000 (cyc) 00:06:39.193 ====================================== 00:06:39.193 poller_cost: 8639 (cyc), 3926 (nsec) 00:06:39.193 00:06:39.193 real 0m1.292s 00:06:39.193 user 0m1.196s 00:06:39.193 sys 0m0.090s 00:06:39.193 00:32:56 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.193 00:32:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.193 ************************************ 00:06:39.193 END TEST thread_poller_perf 00:06:39.193 ************************************ 00:06:39.193 00:32:56 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:39.193 00:32:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:39.193 00:32:56 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:39.193 00:32:56 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.193 00:32:56 thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.193 ************************************ 00:06:39.193 START TEST thread_poller_perf 00:06:39.193 ************************************ 00:06:39.193 00:32:56 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:39.193 [2024-07-16 00:32:56.762232] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:39.193 [2024-07-16 00:32:56.762342] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855435 ] 00:06:39.193 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.193 [2024-07-16 00:32:56.847583] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.193 [2024-07-16 00:32:56.936301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.193 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:40.573 ====================================== 00:06:40.573 busy:2202231044 (cyc) 00:06:40.573 total_run_count: 3375000 00:06:40.573 tsc_hz: 2200000000 (cyc) 00:06:40.573 ====================================== 00:06:40.573 poller_cost: 652 (cyc), 296 (nsec) 00:06:40.573 00:06:40.573 real 0m1.272s 00:06:40.573 user 0m1.174s 00:06:40.573 sys 0m0.092s 00:06:40.573 00:32:58 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.573 00:32:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.573 ************************************ 00:06:40.573 END TEST thread_poller_perf 00:06:40.573 ************************************ 00:06:40.573 00:32:58 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:40.573 00:32:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:40.573 00:06:40.573 real 0m2.799s 00:06:40.573 user 0m2.462s 00:06:40.573 sys 0m0.340s 00:06:40.573 00:32:58 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.573 00:32:58 thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.573 ************************************ 00:06:40.573 END TEST thread 00:06:40.573 ************************************ 00:06:40.573 00:32:58 -- common/autotest_common.sh@1142 -- # return 0 00:06:40.573 00:32:58 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:40.573 00:32:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.573 00:32:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.573 00:32:58 -- common/autotest_common.sh@10 -- # set +x 00:06:40.573 ************************************ 00:06:40.573 START TEST accel 00:06:40.573 ************************************ 00:06:40.573 00:32:58 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:40.573 * Looking for test storage... 00:06:40.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:40.573 00:32:58 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:40.573 00:32:58 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:40.573 00:32:58 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.573 00:32:58 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2855768 00:06:40.573 00:32:58 accel -- accel/accel.sh@63 -- # waitforlisten 2855768 00:06:40.573 00:32:58 accel -- common/autotest_common.sh@829 -- # '[' -z 2855768 ']' 00:06:40.573 00:32:58 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.573 00:32:58 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:40.573 00:32:58 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.573 00:32:58 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:40.573 00:32:58 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
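The poller_cost figures printed by the two runs above follow directly from the other counters: the cost in cycles is busy cycles divided by completed iterations, and the nanosecond value is that divided by the TSC rate. For the 1 microsecond period run, 2211745100 / 256000 is roughly 8639 cycles, and 8639 cycles at 2.2 GHz is roughly 3926 ns; for the 0 microsecond period run, 2202231044 / 3375000 is roughly 652 cycles, or about 296 ns, matching the printed values.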
00:06:40.573 00:32:58 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.573 00:32:58 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.573 00:32:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.573 00:32:58 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.573 00:32:58 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.573 00:32:58 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.573 00:32:58 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.573 00:32:58 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:40.573 00:32:58 accel -- accel/accel.sh@41 -- # jq -r . 00:06:40.573 [2024-07-16 00:32:58.272787] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:40.573 [2024-07-16 00:32:58.272853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855768 ] 00:06:40.573 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.573 [2024-07-16 00:32:58.356887] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.832 [2024-07-16 00:32:58.448322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.399 00:32:59 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.399 00:32:59 accel -- common/autotest_common.sh@862 -- # return 0 00:06:41.399 00:32:59 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:41.399 00:32:59 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:41.399 00:32:59 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:41.399 00:32:59 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:41.399 00:32:59 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:41.399 00:32:59 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:41.399 00:32:59 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:41.399 00:32:59 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.399 00:32:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.399 00:32:59 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.658 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.658 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.658 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.658 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.658 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.658 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.658 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.658 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.658 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.658 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.658 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.658 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.658 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.658 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.658 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.658 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.658 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.658 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.659 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.659 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.659 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.659 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.659 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.659 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.659 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.659 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.659 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.659 
00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.659 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.659 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.659 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.659 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.659 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.659 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.659 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.659 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.659 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.659 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.659 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.659 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.659 00:32:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:41.659 00:32:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:41.659 00:32:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:41.659 00:32:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:41.659 00:32:59 accel -- accel/accel.sh@75 -- # killprocess 2855768 00:06:41.659 00:32:59 accel -- common/autotest_common.sh@948 -- # '[' -z 2855768 ']' 00:06:41.659 00:32:59 accel -- common/autotest_common.sh@952 -- # kill -0 2855768 00:06:41.659 00:32:59 accel -- common/autotest_common.sh@953 -- # uname 00:06:41.659 00:32:59 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.659 00:32:59 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2855768 00:06:41.659 00:32:59 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.659 00:32:59 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.659 00:32:59 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2855768' 00:06:41.659 killing process with pid 2855768 00:06:41.659 00:32:59 accel -- common/autotest_common.sh@967 -- # kill 2855768 00:06:41.659 00:32:59 accel -- common/autotest_common.sh@972 -- # wait 2855768 00:06:41.918 00:32:59 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:41.918 00:32:59 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:41.918 00:32:59 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:41.918 00:32:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.918 00:32:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.918 00:32:59 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:41.918 00:32:59 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:41.918 00:32:59 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:41.918 00:32:59 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.918 00:32:59 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.918 00:32:59 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.918 00:32:59 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.918 00:32:59 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.918 00:32:59 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:41.918 00:32:59 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
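Every opcode in the table built above resolves to the software module, which is consistent with no accelerator modules being configured for this run (the accel_json_cfg array stays empty and all the module checks fall through). The same opcode-to-module table can be dumped by hand; a sketch, assuming the standard scripts/rpc.py client and a target on the default RPC socket:

    # print 'opcode=module' lines, using the same jq filter as the test above
    ./scripts/rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'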
00:06:41.918 00:32:59 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.918 00:32:59 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:41.918 00:32:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.918 00:32:59 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:41.918 00:32:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:41.918 00:32:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.918 00:32:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.178 ************************************ 00:06:42.178 START TEST accel_missing_filename 00:06:42.178 ************************************ 00:06:42.178 00:32:59 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:42.178 00:32:59 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:42.178 00:32:59 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:42.178 00:32:59 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:42.178 00:32:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.178 00:32:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:42.178 00:32:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.178 00:32:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:42.178 00:32:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:42.178 00:32:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:42.178 00:32:59 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.178 00:32:59 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.178 00:32:59 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.178 00:32:59 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.178 00:32:59 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.178 00:32:59 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:42.178 00:32:59 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:42.178 [2024-07-16 00:32:59.812247] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:42.178 [2024-07-16 00:32:59.812318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856144 ] 00:06:42.178 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.178 [2024-07-16 00:32:59.897201] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.178 [2024-07-16 00:32:59.989716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.437 [2024-07-16 00:33:00.035748] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.437 [2024-07-16 00:33:00.099339] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:42.437 A filename is required. 
00:06:42.437 00:33:00 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:42.437 00:33:00 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.437 00:33:00 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:42.437 00:33:00 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:42.437 00:33:00 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:42.437 00:33:00 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.437 00:06:42.437 real 0m0.395s 00:06:42.437 user 0m0.288s 00:06:42.437 sys 0m0.148s 00:06:42.437 00:33:00 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.437 00:33:00 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:42.437 ************************************ 00:06:42.437 END TEST accel_missing_filename 00:06:42.437 ************************************ 00:06:42.437 00:33:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.437 00:33:00 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.437 00:33:00 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:42.437 00:33:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.437 00:33:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.437 ************************************ 00:06:42.437 START TEST accel_compress_verify 00:06:42.437 ************************************ 00:06:42.437 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.437 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:42.437 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.437 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:42.437 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.437 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:42.437 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.437 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.437 00:33:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:42.437 00:33:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:42.438 00:33:00 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.438 00:33:00 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.438 00:33:00 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.438 00:33:00 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.438 00:33:00 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.438 00:33:00 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:42.438 00:33:00 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:42.438 [2024-07-16 00:33:00.276295] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:42.438 [2024-07-16 00:33:00.276348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856340 ] 00:06:42.697 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.697 [2024-07-16 00:33:00.359259] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.697 [2024-07-16 00:33:00.448263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.697 [2024-07-16 00:33:00.492724] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.956 [2024-07-16 00:33:00.554893] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:42.957 00:06:42.957 Compression does not support the verify option, aborting. 00:06:42.957 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:42.957 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.957 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:42.957 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:42.957 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:42.957 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.957 00:06:42.957 real 0m0.388s 00:06:42.957 user 0m0.290s 00:06:42.957 sys 0m0.138s 00:06:42.957 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.957 00:33:00 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:42.957 ************************************ 00:06:42.957 END TEST accel_compress_verify 00:06:42.957 ************************************ 00:06:42.957 00:33:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.957 00:33:00 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:42.957 00:33:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:42.957 00:33:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.957 00:33:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.957 ************************************ 00:06:42.957 START TEST accel_wrong_workload 00:06:42.957 ************************************ 00:06:42.957 00:33:00 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:42.957 00:33:00 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:42.957 00:33:00 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:42.957 00:33:00 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:42.957 00:33:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.957 00:33:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:42.957 00:33:00 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.957 00:33:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:42.957 00:33:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:42.957 00:33:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:42.957 00:33:00 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.957 00:33:00 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.957 00:33:00 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.957 00:33:00 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.957 00:33:00 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.957 00:33:00 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:42.957 00:33:00 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:42.957 Unsupported workload type: foobar 00:06:42.957 [2024-07-16 00:33:00.734758] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:42.957 accel_perf options: 00:06:42.957 [-h help message] 00:06:42.957 [-q queue depth per core] 00:06:42.957 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:42.957 [-T number of threads per core 00:06:42.957 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:42.957 [-t time in seconds] 00:06:42.957 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:42.957 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:42.957 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:42.957 [-l for compress/decompress workloads, name of uncompressed input file 00:06:42.957 [-S for crc32c workload, use this seed value (default 0) 00:06:42.957 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:42.957 [-f for fill workload, use this BYTE value (default 255) 00:06:42.957 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:42.957 [-y verify result if this switch is on] 00:06:42.957 [-a tasks to allocate per core (default: same value as -q)] 00:06:42.957 Can be used to spread operations across a wider range of memory. 
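The usage text above lists the workload types accel_perf accepts, so the negative test only has to confirm that an unknown type (foobar) makes argument parsing fail before the app starts. For contrast, a valid invocation of the same binary, using the crc32c parameters exercised a little later in this log, would be:

    # software crc32c for 1 second, seed 32, with result verification
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y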
00:06:42.957 00:33:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:42.957 00:33:00 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.957 00:33:00 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.957 00:33:00 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.957 00:06:42.957 real 0m0.035s 00:06:42.957 user 0m0.024s 00:06:42.957 sys 0m0.011s 00:06:42.957 00:33:00 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.957 00:33:00 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:42.957 ************************************ 00:06:42.957 END TEST accel_wrong_workload 00:06:42.957 ************************************ 00:06:42.957 Error: writing output failed: Broken pipe 00:06:42.957 00:33:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.957 00:33:00 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:42.957 00:33:00 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:42.957 00:33:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.957 00:33:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.217 ************************************ 00:06:43.217 START TEST accel_negative_buffers 00:06:43.217 ************************************ 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:43.217 00:33:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:43.217 00:33:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:43.217 00:33:00 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.217 00:33:00 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.217 00:33:00 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.217 00:33:00 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.217 00:33:00 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.217 00:33:00 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:43.217 00:33:00 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:43.217 -x option must be non-negative. 
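The same parsing path rejects -x -1 in the negative_buffers case below; as the help text notes, the xor workload needs at least two source buffers. A valid xor run with the minimum buffer count, assuming the same binary, would be:

    # xor across 2 source buffers for 1 second, verifying the result
    ./build/examples/accel_perf -t 1 -w xor -y -x 2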
00:06:43.217 [2024-07-16 00:33:00.840541] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:43.217 accel_perf options: 00:06:43.217 [-h help message] 00:06:43.217 [-q queue depth per core] 00:06:43.217 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:43.217 [-T number of threads per core 00:06:43.217 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:43.217 [-t time in seconds] 00:06:43.217 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:43.217 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:43.217 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:43.217 [-l for compress/decompress workloads, name of uncompressed input file 00:06:43.217 [-S for crc32c workload, use this seed value (default 0) 00:06:43.217 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:43.217 [-f for fill workload, use this BYTE value (default 255) 00:06:43.217 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:43.217 [-y verify result if this switch is on] 00:06:43.217 [-a tasks to allocate per core (default: same value as -q)] 00:06:43.217 Can be used to spread operations across a wider range of memory. 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.217 00:06:43.217 real 0m0.034s 00:06:43.217 user 0m0.021s 00:06:43.217 sys 0m0.013s 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.217 00:33:00 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:43.217 ************************************ 00:06:43.217 END TEST accel_negative_buffers 00:06:43.217 ************************************ 00:06:43.217 Error: writing output failed: Broken pipe 00:06:43.217 00:33:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.217 00:33:00 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:43.217 00:33:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:43.217 00:33:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.217 00:33:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.217 ************************************ 00:06:43.217 START TEST accel_crc32c 00:06:43.217 ************************************ 00:06:43.217 00:33:00 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:43.217 00:33:00 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:43.217 [2024-07-16 00:33:00.937270] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:43.217 [2024-07-16 00:33:00.937341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856436 ] 00:06:43.217 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.217 [2024-07-16 00:33:01.028047] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.477 [2024-07-16 00:33:01.118891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.477 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:43.478 00:33:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:43.478 00:33:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:43.478 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:43.478 00:33:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:44.856 00:33:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.856 00:06:44.856 real 0m1.404s 00:06:44.856 user 0m1.270s 00:06:44.856 sys 0m0.146s 00:06:44.856 00:33:02 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.856 00:33:02 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:44.856 ************************************ 00:06:44.856 END TEST accel_crc32c 00:06:44.856 ************************************ 00:06:44.856 00:33:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.856 00:33:02 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:44.856 00:33:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:44.856 00:33:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.856 00:33:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.856 ************************************ 00:06:44.856 START TEST accel_crc32c_C2 00:06:44.856 ************************************ 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:44.856 00:33:02 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:44.856 [2024-07-16 00:33:02.410190] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:44.856 [2024-07-16 00:33:02.410268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2856791 ] 00:06:44.856 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.856 [2024-07-16 00:33:02.494722] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.856 [2024-07-16 00:33:02.585176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:44.856 00:33:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.233 00:06:46.233 real 0m1.401s 00:06:46.233 user 0m1.277s 00:06:46.233 sys 0m0.135s 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.233 00:33:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:46.233 ************************************ 00:06:46.233 END TEST accel_crc32c_C2 00:06:46.233 ************************************ 00:06:46.233 00:33:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.233 00:33:03 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:46.233 00:33:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:46.233 00:33:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.233 00:33:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.233 ************************************ 00:06:46.233 START TEST accel_copy 00:06:46.233 ************************************ 00:06:46.233 00:33:03 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
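The accel_copy test launched here (accel_test -t 1 -w copy -y) drives a plain buffer copy with result verification. A minimal copy-and-verify sketch follows, assuming the same 4 KiB block size the other passes use and an arbitrary source pattern; it only illustrates the operation being benchmarked, not the accel framework's code path.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        unsigned char src[4096], dst[4096];   /* 4 KiB buffers (assumed block size) */

        memset(src, 0x5A, sizeof(src));       /* arbitrary source pattern (assumption) */
        memset(dst, 0x00, sizeof(dst));

        memcpy(dst, src, sizeof(src));        /* the 'copy' operation under test */

        /* -y in the harness requests verification; memcmp plays that role here */
        if (memcmp(src, dst, sizeof(src)) != 0) {
            fprintf(stderr, "copy verification failed\n");
            return EXIT_FAILURE;
        }
        puts("copy verified");
        return 0;
    }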
00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:46.233 00:33:03 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:46.233 [2024-07-16 00:33:03.874284] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:46.233 [2024-07-16 00:33:03.874338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857070 ] 00:06:46.233 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.233 [2024-07-16 00:33:03.956416] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.233 [2024-07-16 00:33:04.044430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.491 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.491 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.491 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.491 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.491 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.491 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.491 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.492 00:33:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.427 
00:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:47.427 00:33:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.427 00:06:47.427 real 0m1.393s 00:06:47.427 user 0m1.264s 00:06:47.427 sys 0m0.140s 00:06:47.427 00:33:05 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.427 00:33:05 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.427 ************************************ 00:06:47.427 END TEST accel_copy 00:06:47.427 ************************************ 00:06:47.685 00:33:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.685 00:33:05 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.685 00:33:05 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:47.685 00:33:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.685 00:33:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.685 ************************************ 00:06:47.685 START TEST accel_fill 00:06:47.685 ************************************ 00:06:47.685 00:33:05 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:47.685 00:33:05 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:47.685 [2024-07-16 00:33:05.336024] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:47.685 [2024-07-16 00:33:05.336086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857356 ] 00:06:47.685 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.685 [2024-07-16 00:33:05.418365] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.685 [2024-07-16 00:33:05.506822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.944 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.944 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.944 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.944 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.944 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.944 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.944 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.944 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
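The fill pass above records a single-byte pattern of 0x80 (-f 128 on the command line) written across a 4096-byte buffer. A minimal fill-and-verify sketch under those values is shown below; it is an illustration of the operation only, not the accel framework's code.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        unsigned char dst[4096];              /* 4096-byte destination, as in the trace */
        const unsigned char pattern = 0x80;   /* fill byte 0x80 (-f 128 in the command line) */

        memset(dst, pattern, sizeof(dst));    /* the 'fill' operation under test */

        /* check every byte took the pattern, mirroring the -y verification flag */
        for (size_t i = 0; i < sizeof(dst); i++) {
            if (dst[i] != pattern) {
                fprintf(stderr, "fill verification failed at offset %zu\n", i);
                return EXIT_FAILURE;
            }
        }
        puts("fill verified");
        return 0;
    }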
00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:47.945 00:33:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.881 00:33:06 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:48.881 00:33:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.881 00:06:48.882 real 0m1.395s 00:06:48.882 user 0m1.271s 00:06:48.882 sys 0m0.136s 00:06:48.882 00:33:06 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.882 00:33:06 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:48.882 ************************************ 00:06:48.882 END TEST accel_fill 00:06:48.882 ************************************ 00:06:49.140 00:33:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.140 00:33:06 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:49.141 00:33:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:49.141 00:33:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.141 00:33:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.141 ************************************ 00:06:49.141 START TEST accel_copy_crc32c 00:06:49.141 ************************************ 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:49.141 00:33:06 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:49.141 [2024-07-16 00:33:06.802310] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:49.141 [2024-07-16 00:33:06.802413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857635 ] 00:06:49.141 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.141 [2024-07-16 00:33:06.922640] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.400 [2024-07-16 00:33:07.020394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.400 
00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.400 00:33:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.808 00:06:50.808 real 0m1.448s 00:06:50.808 user 0m1.283s 00:06:50.808 sys 0m0.177s 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.808 00:33:08 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:50.808 ************************************ 00:06:50.808 END TEST accel_copy_crc32c 00:06:50.808 ************************************ 00:06:50.808 00:33:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.808 00:33:08 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:50.808 00:33:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:50.808 00:33:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.808 00:33:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.808 ************************************ 00:06:50.808 START TEST accel_copy_crc32c_C2 00:06:50.808 ************************************ 00:06:50.808 00:33:08 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:50.808 [2024-07-16 00:33:08.311352] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:50.808 [2024-07-16 00:33:08.311405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858048 ] 00:06:50.808 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.808 [2024-07-16 00:33:08.393863] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.808 [2024-07-16 00:33:08.481536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.808 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
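The copy_crc32c workload, as the name suggests, combines a buffer copy with a CRC-32C over the data, and the -C 2 variant started here chains the computation across more than one buffer. Assuming that means carrying the CRC state across two 4 KiB segments, the sketch below shows the idea; it is an illustration under that assumption, not SPDK's implementation, and the segment patterns are arbitrary.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Advance a CRC-32C state over one segment without applying the final inversion. */
    static uint32_t crc32c_update(uint32_t state, const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            state ^= buf[i];
            for (int b = 0; b < 8; b++)
                state = (state >> 1) ^ (0x82F63B78u & -(state & 1u));
        }
        return state;
    }

    int main(void)
    {
        uint8_t src[2][4096], dst[2][4096];       /* two chained 4 KiB segments (assumption) */
        uint32_t state = 0xFFFFFFFFu;

        memset(src[0], 0x11, sizeof(src[0]));     /* arbitrary patterns (assumption) */
        memset(src[1], 0x22, sizeof(src[1]));

        for (int seg = 0; seg < 2; seg++) {
            memcpy(dst[seg], src[seg], sizeof(src[seg]));              /* copy part */
            state = crc32c_update(state, dst[seg], sizeof(dst[seg]));  /* running CRC part */
        }
        printf("chained crc32c = 0x%08x\n", ~state);
        return 0;
    }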
00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.809 00:33:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.191 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.191 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.191 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.191 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.191 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.191 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.191 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.191 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.191 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.191 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.191 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.191 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.191 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.192 00:06:52.192 real 0m1.392s 00:06:52.192 user 0m1.271s 00:06:52.192 sys 0m0.133s 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.192 00:33:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:52.192 ************************************ 00:06:52.192 END TEST accel_copy_crc32c_C2 00:06:52.192 ************************************ 00:06:52.192 00:33:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.192 00:33:09 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:52.192 00:33:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:52.192 00:33:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.192 00:33:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.192 ************************************ 00:06:52.192 START TEST accel_dualcast 00:06:52.192 ************************************ 00:06:52.192 00:33:09 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:52.192 00:33:09 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:52.192 [2024-07-16 00:33:09.782785] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
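accel_dualcast, which begins here, is the least self-explanatory workload in this group: it duplicates one source buffer into two destinations in a single operation. A minimal sketch of that behaviour with verification follows, assuming the same 4 KiB block size as the other passes and arbitrary buffer contents.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        unsigned char src[4096], dst1[4096], dst2[4096];  /* 4 KiB buffers (assumed block size) */

        memset(src, 0x3C, sizeof(src));                   /* arbitrary source pattern (assumption) */

        /* dualcast: the same source lands in two destinations */
        memcpy(dst1, src, sizeof(src));
        memcpy(dst2, src, sizeof(src));

        /* mirror the -y verification flag */
        if (memcmp(dst1, src, sizeof(src)) || memcmp(dst2, src, sizeof(src))) {
            fprintf(stderr, "dualcast verification failed\n");
            return EXIT_FAILURE;
        }
        puts("dualcast verified");
        return 0;
    }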
00:06:52.192 [2024-07-16 00:33:09.782905] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858569 ] 00:06:52.192 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.192 [2024-07-16 00:33:09.903918] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.192 [2024-07-16 00:33:10.001065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:52.451 00:33:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.386 00:33:11 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:53.386 00:33:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:53.387 00:33:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.387 00:33:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:53.387 00:33:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.387 00:06:53.387 real 0m1.449s 00:06:53.387 user 0m1.283s 00:06:53.387 sys 0m0.177s 00:06:53.387 00:33:11 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.387 00:33:11 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:53.387 ************************************ 00:06:53.387 END TEST accel_dualcast 00:06:53.387 ************************************ 00:06:53.643 00:33:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.643 00:33:11 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:53.643 00:33:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:53.643 00:33:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.643 00:33:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.643 ************************************ 00:06:53.643 START TEST accel_compare 00:06:53.643 ************************************ 00:06:53.643 00:33:11 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:53.643 00:33:11 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:53.643 [2024-07-16 00:33:11.296904] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:53.643 [2024-07-16 00:33:11.296958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858885 ] 00:06:53.643 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.643 [2024-07-16 00:33:11.373460] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.643 [2024-07-16 00:33:11.468518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:53.900 00:33:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.835 
00:33:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:54.835 00:33:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.835 00:06:54.835 real 0m1.393s 00:06:54.835 user 0m1.264s 00:06:54.835 sys 0m0.142s 00:06:54.835 00:33:12 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.835 00:33:12 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:54.835 ************************************ 00:06:54.835 END TEST accel_compare 00:06:54.835 ************************************ 00:06:55.093 00:33:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.093 00:33:12 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:55.093 00:33:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:55.093 00:33:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.093 00:33:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.093 ************************************ 00:06:55.093 START TEST accel_xor 00:06:55.093 ************************************ 00:06:55.093 00:33:12 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:55.093 00:33:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:55.093 00:33:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:55.093 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.093 00:33:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:55.093 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.093 00:33:12 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:55.093 00:33:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:55.094 00:33:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.094 00:33:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.094 00:33:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.094 00:33:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.094 00:33:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.094 00:33:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:55.094 00:33:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:55.094 [2024-07-16 00:33:12.758150] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:55.094 [2024-07-16 00:33:12.758209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859173 ] 00:06:55.094 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.094 [2024-07-16 00:33:12.841960] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.094 [2024-07-16 00:33:12.931866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.371 00:33:12 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:55.371 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:55.372 00:33:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:56.306 00:33:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.306 00:06:56.306 real 0m1.394s 00:06:56.306 user 0m1.273s 00:06:56.306 sys 0m0.134s 00:06:56.306 00:33:14 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.306 00:33:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:56.306 ************************************ 00:06:56.306 END TEST accel_xor 00:06:56.306 ************************************ 00:06:56.564 00:33:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.564 00:33:14 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:56.564 00:33:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:56.564 00:33:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.564 00:33:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.564 ************************************ 00:06:56.564 START TEST accel_xor 00:06:56.564 ************************************ 00:06:56.564 00:33:14 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:56.564 00:33:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:56.564 [2024-07-16 00:33:14.227127] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:06:56.564 [2024-07-16 00:33:14.227186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859460 ] 00:06:56.564 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.564 [2024-07-16 00:33:14.309890] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.564 [2024-07-16 00:33:14.398318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:56.823 00:33:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:57.759 00:33:15 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.759 00:06:57.759 real 0m1.394s 00:06:57.759 user 0m1.274s 00:06:57.759 sys 0m0.134s 00:06:57.759 00:33:15 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.759 00:33:15 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:57.759 ************************************ 00:06:57.759 END TEST accel_xor 00:06:57.759 ************************************ 00:06:58.018 00:33:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.018 00:33:15 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:58.018 00:33:15 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:58.018 00:33:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.018 00:33:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.018 ************************************ 00:06:58.018 START TEST accel_dif_verify 00:06:58.018 ************************************ 00:06:58.018 00:33:15 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:58.018 00:33:15 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:58.018 [2024-07-16 00:33:15.688963] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
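Note on the two xor runs above (the default case and the "-x 3" three-source case): both also land on the software module. A minimal sketch of the operation they exercise, with illustrative names only (not SPDK's API):

#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative sketch only (not SPDK code): XOR nsrc source buffers of
 * length len into dst; "-x 3" in the runs above corresponds to three
 * source buffers.
 */
static void
xor_gen(uint8_t *dst, uint8_t *const *sources, int nsrc, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t v = 0;

        for (int s = 0; s < nsrc; s++) {
            v ^= sources[s][i];
        }
        dst[i] = v;
    }
}
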
00:06:58.018 [2024-07-16 00:33:15.689028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859755 ] 00:06:58.018 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.018 [2024-07-16 00:33:15.773786] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.277 [2024-07-16 00:33:15.861526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:58.277 00:33:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.214 00:33:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:59.214 00:33:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.215 00:33:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.215 00:33:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.215 00:33:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.215 00:33:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.215 00:33:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.215 00:33:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.215 00:33:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:59.474 00:33:17 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.474 00:06:59.474 real 0m1.395s 00:06:59.474 user 0m1.278s 00:06:59.474 sys 0m0.133s 00:06:59.474 00:33:17 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.474 00:33:17 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:59.474 ************************************ 00:06:59.474 END TEST accel_dif_verify 00:06:59.474 ************************************ 00:06:59.474 00:33:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.474 00:33:17 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:59.474 00:33:17 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:59.474 00:33:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.474 00:33:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.474 ************************************ 00:06:59.474 START TEST accel_dif_generate 00:06:59.474 ************************************ 00:06:59.474 00:33:17 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:59.474 00:33:17 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:59.474 00:33:17 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:59.474 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.474 
00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.474 00:33:17 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:59.474 00:33:17 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:59.474 00:33:17 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:59.474 00:33:17 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.474 00:33:17 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.474 00:33:17 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.474 00:33:17 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.474 00:33:17 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.474 00:33:17 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:59.474 00:33:17 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:59.474 [2024-07-16 00:33:17.157588] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:06:59.474 [2024-07-16 00:33:17.157691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860041 ] 00:06:59.474 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.474 [2024-07-16 00:33:17.273664] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.733 [2024-07-16 00:33:17.365014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:59.733 00:33:17 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.733 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.734 00:33:17 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:59.734 00:33:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.113 00:33:18 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:01.113 00:33:18 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.113 00:07:01.113 real 0m1.435s 00:07:01.113 user 0m1.283s 00:07:01.113 sys 0m0.168s 00:07:01.113 00:33:18 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.113 00:33:18 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:01.113 ************************************ 00:07:01.113 END TEST accel_dif_generate 00:07:01.113 ************************************ 00:07:01.113 00:33:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.113 00:33:18 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:01.113 00:33:18 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:01.113 00:33:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.113 00:33:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.113 ************************************ 00:07:01.113 START TEST accel_dif_generate_copy 00:07:01.113 ************************************ 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:01.113 [2024-07-16 00:33:18.653704] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
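For reference, the dif_generate case that finished above can be re-run by hand against the same build tree. This is a minimal sketch, assuming the software engine only (no accel modules configured), in which case the JSON config the wrapper pipes in through /dev/fd/62 can likely be omitted; the -t/-w arguments are the ones echoed in the trace. The dif_generate_copy case that begins next follows the same pattern with -w dif_generate_copy.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # run the DIF-generate workload for 1 second on the software engine
  "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate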
00:07:01.113 [2024-07-16 00:33:18.653767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860364 ] 00:07:01.113 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.113 [2024-07-16 00:33:18.737265] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.113 [2024-07-16 00:33:18.824878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:01.113 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.114 00:33:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.491 00:07:02.491 real 0m1.393s 00:07:02.491 user 0m1.279s 00:07:02.491 sys 0m0.128s 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.491 00:33:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:02.491 ************************************ 00:07:02.491 END TEST accel_dif_generate_copy 00:07:02.491 ************************************ 00:07:02.491 00:33:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.491 00:33:20 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:02.491 00:33:20 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.491 00:33:20 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:02.491 00:33:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.491 00:33:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.491 ************************************ 00:07:02.491 START TEST accel_comp 00:07:02.491 ************************************ 00:07:02.491 00:33:20 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.491 00:33:20 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:02.491 00:33:20 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:02.491 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.491 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.491 00:33:20 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.491 00:33:20 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.491 00:33:20 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:02.491 00:33:20 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.491 00:33:20 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.491 00:33:20 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.491 00:33:20 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.491 00:33:20 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.491 00:33:20 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:02.491 00:33:20 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:02.491 [2024-07-16 00:33:20.119482] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:02.492 [2024-07-16 00:33:20.119536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860654 ] 00:07:02.492 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.492 [2024-07-16 00:33:20.200951] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.492 [2024-07-16 00:33:20.288596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.751 00:33:20 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.751 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:02.752 00:33:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:03.687 00:33:21 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.687 00:07:03.687 real 0m1.393s 00:07:03.687 user 0m1.278s 00:07:03.687 sys 0m0.130s 00:07:03.687 00:33:21 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.687 00:33:21 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:03.687 ************************************ 00:07:03.687 END TEST accel_comp 00:07:03.687 ************************************ 00:07:03.687 00:33:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.687 00:33:21 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.687 00:33:21 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:03.687 00:33:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.687 00:33:21 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:03.946 ************************************ 00:07:03.946 START TEST accel_decomp 00:07:03.946 ************************************ 00:07:03.946 00:33:21 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:03.946 00:33:21 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:03.946 [2024-07-16 00:33:21.582079] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
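The compress case that just ended and the decompress case starting here both feed the same input file to accel_perf via -l. A hand-run sketch of the pair, assuming the workspace layout shown in this log and reading -y as the result-verification switch the harness adds for the decompress direction:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # compress test/accel/bib for 1 second, then decompress it with verification (-y)
  "$SPDK/build/examples/accel_perf" -t 1 -w compress   -l "$SPDK/test/accel/bib"
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y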
00:07:03.946 [2024-07-16 00:33:21.582148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860948 ] 00:07:03.946 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.946 [2024-07-16 00:33:21.654328] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.946 [2024-07-16 00:33:21.746595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:04.205 00:33:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 00:33:22 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.140 00:33:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.140 00:07:05.140 real 0m1.390s 00:07:05.140 user 0m1.280s 00:07:05.140 sys 0m0.125s 00:07:05.140 00:33:22 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.140 00:33:22 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:05.140 ************************************ 00:07:05.140 END TEST accel_decomp 00:07:05.140 ************************************ 00:07:05.402 00:33:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.402 00:33:22 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:05.402 00:33:22 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:05.402 00:33:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.402 00:33:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.402 ************************************ 00:07:05.402 START TEST accel_decomp_full 00:07:05.402 ************************************ 00:07:05.402 00:33:23 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:05.402 00:33:23 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:05.402 00:33:23 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:05.402 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.402 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.402 00:33:23 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:05.402 00:33:23 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:05.402 00:33:23 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:05.402 00:33:23 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.402 00:33:23 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.402 00:33:23 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.402 00:33:23 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.402 00:33:23 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.402 00:33:23 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:05.402 00:33:23 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:05.402 [2024-07-16 00:33:23.036511] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:05.402 [2024-07-16 00:33:23.036563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861235 ] 00:07:05.402 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.402 [2024-07-16 00:33:23.118547] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.402 [2024-07-16 00:33:23.206990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.660 00:33:23 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.660 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:05.661 00:33:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.596 00:33:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.596 00:07:06.596 real 0m1.409s 00:07:06.596 user 0m1.290s 00:07:06.596 sys 0m0.134s 00:07:06.596 00:33:24 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.596 00:33:24 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:06.596 ************************************ 00:07:06.596 END TEST accel_decomp_full 00:07:06.596 ************************************ 00:07:06.855 00:33:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.855 00:33:24 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:06.855 00:33:24 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:06.855 00:33:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.855 00:33:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.855 ************************************ 00:07:06.855 START TEST accel_decomp_mcore 00:07:06.855 ************************************ 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:06.855 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:06.855 [2024-07-16 00:33:24.514469] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
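The mcore variant starting here differs from the plain decompress case only in the -m 0xf core mask taken from the run_test line above, which is why the EAL parameters below carry -c 0xf and four reactor notices follow. A hedged one-line repro under the same assumptions as the earlier sketches:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # same decompress-and-verify workload, spread across cores 0-3
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf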
00:07:06.855 [2024-07-16 00:33:24.514520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861535 ] 00:07:06.855 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.855 [2024-07-16 00:33:24.597208] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.855 [2024-07-16 00:33:24.688627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.855 [2024-07-16 00:33:24.688741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.855 [2024-07-16 00:33:24.688853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.855 [2024-07-16 00:33:24.688853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:07.115 00:33:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.492 00:07:08.492 real 0m1.414s 00:07:08.492 user 0m4.643s 00:07:08.492 sys 0m0.153s 00:07:08.492 00:33:25 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.492 00:33:25 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:08.492 ************************************ 00:07:08.492 END TEST accel_decomp_mcore 00:07:08.492 ************************************ 00:07:08.492 00:33:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.492 00:33:25 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.492 00:33:25 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:08.492 00:33:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.492 00:33:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.492 ************************************ 00:07:08.492 START TEST accel_decomp_full_mcore 00:07:08.492 ************************************ 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:08.492 00:33:25 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:08.492 [2024-07-16 00:33:25.997869] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:08.492 [2024-07-16 00:33:25.997929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861841 ] 00:07:08.492 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.492 [2024-07-16 00:33:26.078823] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.492 [2024-07-16 00:33:26.169879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.492 [2024-07-16 00:33:26.169920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.492 [2024-07-16 00:33:26.170031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.492 [2024-07-16 00:33:26.170032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:08.492 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:08.493 00:33:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.871 00:07:09.871 real 0m1.436s 00:07:09.871 user 0m4.742s 00:07:09.871 sys 0m0.144s 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.871 00:33:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:09.871 ************************************ 00:07:09.871 END TEST accel_decomp_full_mcore 00:07:09.871 ************************************ 00:07:09.871 00:33:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.871 00:33:27 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:09.871 00:33:27 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:09.871 00:33:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.871 00:33:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.871 ************************************ 00:07:09.871 START TEST accel_decomp_mthread 00:07:09.871 ************************************ 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:09.871 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:09.871 [2024-07-16 00:33:27.502338] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
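The decompress cases traced here are all the same accel_perf run with different flags: the mcore case pins the workload to core mask 0xf, the "full" cases add -o 0 and report a 111250-byte buffer where the mthread case reports 4096 bytes, and the mthread cases keep a single core but pass -T 2. A minimal hand-run sketch of those invocations, with paths and flag values copied from the trace above (the comments describe what this log shows; the flag meanings themselves are an assumption, not documented accel_perf semantics):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # full multi-core decompress of the test bitstream: 1-second run (-t 1), verify (-y), full buffer (-o 0), cores 0-3 (-m 0xf)
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf
    # multi-threaded variant: default single core, two worker threads per core (-T 2)
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2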
00:07:09.871 [2024-07-16 00:33:27.502393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862159 ] 00:07:09.871 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.871 [2024-07-16 00:33:27.584558] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.871 [2024-07-16 00:33:27.672210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:10.130 00:33:27 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:10.130 00:33:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.066 00:33:28 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:11.066 00:33:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.066 00:07:11.067 real 0m1.400s 00:07:11.067 user 0m1.277s 00:07:11.067 sys 0m0.138s 00:07:11.067 00:33:28 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.067 00:33:28 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:11.067 ************************************ 00:07:11.067 END TEST accel_decomp_mthread 00:07:11.067 ************************************ 00:07:11.326 00:33:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.326 00:33:28 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.326 00:33:28 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:11.326 00:33:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.326 00:33:28 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.326 ************************************ 00:07:11.326 START TEST accel_decomp_full_mthread 00:07:11.326 ************************************ 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:11.326 00:33:28 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:11.326 [2024-07-16 00:33:28.971624] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:11.326 [2024-07-16 00:33:28.971679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862452 ] 00:07:11.326 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.326 [2024-07-16 00:33:29.054076] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.326 [2024-07-16 00:33:29.142073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.584 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.584 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:11.585 00:33:29 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:11.585 00:33:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.964 00:07:12.964 real 0m1.438s 00:07:12.964 user 0m1.318s 00:07:12.964 sys 0m0.134s 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.964 00:33:30 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:12.964 ************************************ 00:07:12.964 END 
TEST accel_decomp_full_mthread 00:07:12.964 ************************************ 00:07:12.964 00:33:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.964 00:33:30 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:12.964 00:33:30 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:12.964 00:33:30 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:12.964 00:33:30 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:12.964 00:33:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.965 00:33:30 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.965 00:33:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.965 00:33:30 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.965 00:33:30 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.965 00:33:30 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.965 00:33:30 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.965 00:33:30 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:12.965 00:33:30 accel -- accel/accel.sh@41 -- # jq -r . 00:07:12.965 ************************************ 00:07:12.965 START TEST accel_dif_functional_tests 00:07:12.965 ************************************ 00:07:12.965 00:33:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:12.965 [2024-07-16 00:33:30.503292] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:12.965 [2024-07-16 00:33:30.503349] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862771 ] 00:07:12.965 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.965 [2024-07-16 00:33:30.586613] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.965 [2024-07-16 00:33:30.675579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.965 [2024-07-16 00:33:30.675692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.965 [2024-07-16 00:33:30.675692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.965 00:07:12.965 00:07:12.965 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.965 http://cunit.sourceforge.net/ 00:07:12.965 00:07:12.965 00:07:12.965 Suite: accel_dif 00:07:12.965 Test: verify: DIF generated, GUARD check ...passed 00:07:12.965 Test: verify: DIF generated, APPTAG check ...passed 00:07:12.965 Test: verify: DIF generated, REFTAG check ...passed 00:07:12.965 Test: verify: DIF not generated, GUARD check ...[2024-07-16 00:33:30.751897] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:12.965 passed 00:07:12.965 Test: verify: DIF not generated, APPTAG check ...[2024-07-16 00:33:30.751965] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:12.965 passed 00:07:12.965 Test: verify: DIF not generated, REFTAG check ...[2024-07-16 00:33:30.751995] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:12.965 passed 00:07:12.965 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:12.965 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-16 
00:33:30.752066] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:12.965 passed 00:07:12.965 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:12.965 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:12.965 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:12.965 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-16 00:33:30.752228] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:12.965 passed 00:07:12.965 Test: verify copy: DIF generated, GUARD check ...passed 00:07:12.965 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:12.965 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:12.965 Test: verify copy: DIF not generated, GUARD check ...[2024-07-16 00:33:30.752622] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:12.965 passed 00:07:12.965 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-16 00:33:30.752660] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:12.965 passed 00:07:12.965 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-16 00:33:30.752695] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:12.965 passed 00:07:12.965 Test: generate copy: DIF generated, GUARD check ...passed 00:07:12.965 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:12.965 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:12.965 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:12.965 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:12.965 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:12.965 Test: generate copy: iovecs-len validate ...[2024-07-16 00:33:30.752963] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:12.965 passed 00:07:12.965 Test: generate copy: buffer alignment validate ...passed 00:07:12.965 00:07:12.965 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.965 suites 1 1 n/a 0 0 00:07:12.965 tests 26 26 26 0 0 00:07:12.965 asserts 115 115 115 0 n/a 00:07:12.965 00:07:12.965 Elapsed time = 0.002 seconds 00:07:13.224 00:07:13.225 real 0m0.484s 00:07:13.225 user 0m0.691s 00:07:13.225 sys 0m0.176s 00:07:13.225 00:33:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.225 00:33:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:13.225 ************************************ 00:07:13.225 END TEST accel_dif_functional_tests 00:07:13.225 ************************************ 00:07:13.225 00:33:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.225 00:07:13.225 real 0m32.852s 00:07:13.225 user 0m36.281s 00:07:13.225 sys 0m4.987s 00:07:13.225 00:33:30 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.225 00:33:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.225 ************************************ 00:07:13.225 END TEST accel 00:07:13.225 ************************************ 00:07:13.225 00:33:31 -- common/autotest_common.sh@1142 -- # return 0 00:07:13.225 00:33:31 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:13.225 00:33:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.225 00:33:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.225 00:33:31 -- common/autotest_common.sh@10 -- # set +x 00:07:13.225 ************************************ 00:07:13.225 START TEST accel_rpc 00:07:13.225 ************************************ 00:07:13.225 00:33:31 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:13.483 * Looking for test storage... 00:07:13.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:13.483 00:33:31 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:13.483 00:33:31 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2862855 00:07:13.483 00:33:31 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2862855 00:07:13.483 00:33:31 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:13.483 00:33:31 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2862855 ']' 00:07:13.483 00:33:31 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.483 00:33:31 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.483 00:33:31 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.483 00:33:31 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.483 00:33:31 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.483 [2024-07-16 00:33:31.188535] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:07:13.483 [2024-07-16 00:33:31.188591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862855 ] 00:07:13.483 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.483 [2024-07-16 00:33:31.273932] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.742 [2024-07-16 00:33:31.363835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.310 00:33:32 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.310 00:33:32 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:14.310 00:33:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:14.310 00:33:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:14.310 00:33:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:14.310 00:33:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:14.310 00:33:32 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:14.310 00:33:32 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.310 00:33:32 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.310 00:33:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.569 ************************************ 00:07:14.569 START TEST accel_assign_opcode 00:07:14.569 ************************************ 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:14.569 [2024-07-16 00:33:32.166274] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:14.569 [2024-07-16 00:33:32.174285] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:14.569 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.828 software 00:07:14.828 00:07:14.828 real 0m0.255s 00:07:14.828 user 0m0.052s 00:07:14.828 sys 0m0.008s 00:07:14.828 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.828 00:33:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:14.828 ************************************ 00:07:14.828 END TEST accel_assign_opcode 00:07:14.828 ************************************ 00:07:14.828 00:33:32 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:14.828 00:33:32 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2862855 00:07:14.828 00:33:32 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2862855 ']' 00:07:14.828 00:33:32 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2862855 00:07:14.828 00:33:32 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:14.828 00:33:32 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.828 00:33:32 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2862855 00:07:14.828 00:33:32 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.828 00:33:32 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.828 00:33:32 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2862855' 00:07:14.828 killing process with pid 2862855 00:07:14.828 00:33:32 accel_rpc -- common/autotest_common.sh@967 -- # kill 2862855 00:07:14.828 00:33:32 accel_rpc -- common/autotest_common.sh@972 -- # wait 2862855 00:07:15.087 00:07:15.087 real 0m1.783s 00:07:15.087 user 0m1.966s 00:07:15.087 sys 0m0.470s 00:07:15.087 00:33:32 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.087 00:33:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.087 ************************************ 00:07:15.087 END TEST accel_rpc 00:07:15.087 ************************************ 00:07:15.087 00:33:32 -- common/autotest_common.sh@1142 -- # return 0 00:07:15.087 00:33:32 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:15.087 00:33:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.087 00:33:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.087 00:33:32 -- common/autotest_common.sh@10 -- # set +x 00:07:15.087 ************************************ 00:07:15.087 START TEST app_cmdline 00:07:15.087 ************************************ 00:07:15.087 00:33:32 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:15.347 * Looking for test storage... 
00:07:15.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:15.347 00:33:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:15.347 00:33:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2863217 00:07:15.347 00:33:32 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:15.347 00:33:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2863217 00:07:15.347 00:33:32 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2863217 ']' 00:07:15.347 00:33:32 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.347 00:33:32 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.347 00:33:32 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.347 00:33:32 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.347 00:33:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.347 [2024-07-16 00:33:33.041460] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:15.347 [2024-07-16 00:33:33.041517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2863217 ] 00:07:15.347 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.347 [2024-07-16 00:33:33.113668] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.606 [2024-07-16 00:33:33.208986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.606 00:33:33 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.606 00:33:33 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:15.606 00:33:33 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:15.864 { 00:07:15.864 "version": "SPDK v24.09-pre git sha1 e9e51ebfe", 00:07:15.864 "fields": { 00:07:15.864 "major": 24, 00:07:15.864 "minor": 9, 00:07:15.864 "patch": 0, 00:07:15.864 "suffix": "-pre", 00:07:15.864 "commit": "e9e51ebfe" 00:07:15.864 } 00:07:15.864 } 00:07:15.864 00:33:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:15.864 00:33:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:15.864 00:33:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:15.864 00:33:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:15.864 00:33:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:15.864 00:33:33 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.864 00:33:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.864 00:33:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:15.864 00:33:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:15.864 00:33:33 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.122 00:33:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:16.122 00:33:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:16.122 00:33:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.122 00:33:33 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:16.122 00:33:33 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.122 00:33:33 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.122 00:33:33 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.122 00:33:33 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.122 00:33:33 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.122 00:33:33 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.122 00:33:33 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.122 00:33:33 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:16.122 00:33:33 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:16.122 00:33:33 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:16.122 request: 00:07:16.122 { 00:07:16.122 "method": "env_dpdk_get_mem_stats", 00:07:16.122 "req_id": 1 00:07:16.122 } 00:07:16.122 Got JSON-RPC error response 00:07:16.122 response: 00:07:16.122 { 00:07:16.122 "code": -32601, 00:07:16.122 "message": "Method not found" 00:07:16.122 } 00:07:16.380 00:33:33 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:16.380 00:33:33 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:16.380 00:33:33 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:16.380 00:33:33 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:16.380 00:33:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2863217 00:07:16.380 00:33:33 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2863217 ']' 00:07:16.380 00:33:33 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2863217 00:07:16.380 00:33:33 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:16.380 00:33:33 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.380 00:33:33 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2863217 00:07:16.380 00:33:34 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:16.380 00:33:34 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.380 00:33:34 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2863217' 00:07:16.380 killing process with pid 2863217 00:07:16.380 00:33:34 app_cmdline -- common/autotest_common.sh@967 -- # kill 2863217 00:07:16.380 00:33:34 app_cmdline -- common/autotest_common.sh@972 -- # wait 2863217 00:07:16.638 00:07:16.638 real 0m1.455s 00:07:16.638 user 0m1.850s 00:07:16.638 sys 0m0.424s 00:07:16.638 00:33:34 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
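Condensed, the point of the app_cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while anything else, here env_dpdk_get_mem_stats, is rejected with JSON-RPC error -32601 "Method not found". A rough manual equivalent, assuming the target is already listening on /var/tmp/spdk.sock as in the trace (paths and flags copied from the trace):

```bash
# Sketch of the RPC allow-list behaviour exercised above.
# The target was started as:
#   build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$SPDK/scripts/rpc.py spdk_get_version         # allowed: returns the version object shown above
$SPDK/scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two whitelisted methods
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # rejected: JSON-RPC code -32601, "Method not found"
```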
00:07:16.638 00:33:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:16.638 ************************************ 00:07:16.638 END TEST app_cmdline 00:07:16.638 ************************************ 00:07:16.638 00:33:34 -- common/autotest_common.sh@1142 -- # return 0 00:07:16.638 00:33:34 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:16.638 00:33:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.638 00:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.638 00:33:34 -- common/autotest_common.sh@10 -- # set +x 00:07:16.638 ************************************ 00:07:16.638 START TEST version 00:07:16.638 ************************************ 00:07:16.638 00:33:34 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:16.897 * Looking for test storage... 00:07:16.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:16.897 00:33:34 version -- app/version.sh@17 -- # get_header_version major 00:07:16.897 00:33:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:16.897 00:33:34 version -- app/version.sh@14 -- # cut -f2 00:07:16.897 00:33:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.897 00:33:34 version -- app/version.sh@17 -- # major=24 00:07:16.897 00:33:34 version -- app/version.sh@18 -- # get_header_version minor 00:07:16.897 00:33:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:16.897 00:33:34 version -- app/version.sh@14 -- # cut -f2 00:07:16.897 00:33:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.897 00:33:34 version -- app/version.sh@18 -- # minor=9 00:07:16.897 00:33:34 version -- app/version.sh@19 -- # get_header_version patch 00:07:16.897 00:33:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:16.897 00:33:34 version -- app/version.sh@14 -- # cut -f2 00:07:16.897 00:33:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.897 00:33:34 version -- app/version.sh@19 -- # patch=0 00:07:16.897 00:33:34 version -- app/version.sh@20 -- # get_header_version suffix 00:07:16.897 00:33:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:16.897 00:33:34 version -- app/version.sh@14 -- # cut -f2 00:07:16.897 00:33:34 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.897 00:33:34 version -- app/version.sh@20 -- # suffix=-pre 00:07:16.897 00:33:34 version -- app/version.sh@22 -- # version=24.9 00:07:16.897 00:33:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:16.897 00:33:34 version -- app/version.sh@28 -- # version=24.9rc0 00:07:16.897 00:33:34 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:16.897 00:33:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:16.897 00:33:34 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:16.897 00:33:34 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:16.897 00:07:16.897 real 0m0.165s 00:07:16.897 user 0m0.085s 00:07:16.897 sys 0m0.117s 00:07:16.897 00:33:34 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.897 00:33:34 version -- common/autotest_common.sh@10 -- # set +x 00:07:16.897 ************************************ 00:07:16.897 END TEST version 00:07:16.897 ************************************ 00:07:16.897 00:33:34 -- common/autotest_common.sh@1142 -- # return 0 00:07:16.898 00:33:34 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:16.898 00:33:34 -- spdk/autotest.sh@198 -- # uname -s 00:07:16.898 00:33:34 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:16.898 00:33:34 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:16.898 00:33:34 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:16.898 00:33:34 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:16.898 00:33:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:16.898 00:33:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:16.898 00:33:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:16.898 00:33:34 -- common/autotest_common.sh@10 -- # set +x 00:07:16.898 00:33:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:16.898 00:33:34 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:16.898 00:33:34 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:16.898 00:33:34 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:16.898 00:33:34 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:16.898 00:33:34 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:16.898 00:33:34 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:16.898 00:33:34 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:16.898 00:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.898 00:33:34 -- common/autotest_common.sh@10 -- # set +x 00:07:16.898 ************************************ 00:07:16.898 START TEST nvmf_tcp 00:07:16.898 ************************************ 00:07:16.898 00:33:34 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:17.158 * Looking for test storage... 00:07:17.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.158 00:33:34 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.158 00:33:34 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.158 00:33:34 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.158 00:33:34 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.158 00:33:34 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.158 00:33:34 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.158 00:33:34 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:17.158 00:33:34 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:17.158 00:33:34 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:17.158 00:33:34 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:17.158 00:33:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.159 00:33:34 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:17.159 00:33:34 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:17.159 00:33:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:17.159 00:33:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.159 00:33:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.159 ************************************ 00:07:17.159 START TEST nvmf_example 00:07:17.159 ************************************ 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:17.159 * Looking for test storage... 
00:07:17.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:17.159 00:33:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.786 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.786 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:23.786 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:23.786 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:23.786 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:23.786 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:23.786 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:23.786 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:23.786 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:23.787 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:23.787 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:23.787 Found net devices under 
0000:af:00.0: cvl_0_0 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:23.787 Found net devices under 0000:af:00.1: cvl_0_1 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:23.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:07:23.787 00:07:23.787 --- 10.0.0.2 ping statistics --- 00:07:23.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.787 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:07:23.787 00:07:23.787 --- 10.0.0.1 ping statistics --- 00:07:23.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.787 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2867011 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2867011 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2867011 ']' 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
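For reference, the nvmftestinit plumbing that just completed boils down to a handful of ip/iptables commands: one e810 port stays in the root namespace as the initiator side (10.0.0.1), the other is moved into a fresh namespace as the target side (10.0.0.2), and TCP port 4420 is opened between them. A distilled sketch using the interface names printed in the trace (cvl_0_0, cvl_0_1); nothing here is new beyond the condensed form:

```bash
# Distilled from the nvmftestinit trace above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (namespace)

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port

ping -c 1 10.0.0.2                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability
```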
00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.787 00:33:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.787 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.355 00:33:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.355 00:33:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:24.355 00:33:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:24.355 00:33:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:24.355 00:33:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.355 00:33:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:24.355 00:33:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.355 00:33:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:24.355 00:33:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:24.355 EAL: No free 2048 kB hugepages reported on node 1 
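Before the result block that follows, the example target was configured over JSON-RPC and then driven from the initiator side with spdk_nvme_perf. A condensed sketch of that sequence (rpc_cmd in the harness maps to scripts/rpc.py; every flag below is the one shown in the trace, only the shell variable is added):

```bash
# Target-side configuration, condensed from the nvmf_example trace above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192                     # create the TCP transport (options as traced)
$RPC bdev_malloc_create 64 512                                   # 64 MiB malloc bdev, 512 B blocks -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, set serial
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose Malloc0 as namespace 1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: 10 s of 4 KiB mixed random I/O at queue depth 64 (its output follows below).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```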
00:07:36.560 Initializing NVMe Controllers 00:07:36.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:36.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:36.560 Initialization complete. Launching workers. 00:07:36.560 ======================================================== 00:07:36.560 Latency(us) 00:07:36.560 Device Information : IOPS MiB/s Average min max 00:07:36.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10883.10 42.51 5881.84 1077.42 17227.80 00:07:36.560 ======================================================== 00:07:36.560 Total : 10883.10 42.51 5881.84 1077.42 17227.80 00:07:36.560 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:36.560 rmmod nvme_tcp 00:07:36.560 rmmod nvme_fabrics 00:07:36.560 rmmod nvme_keyring 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2867011 ']' 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2867011 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2867011 ']' 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2867011 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2867011 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2867011' 00:07:36.560 killing process with pid 2867011 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2867011 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2867011 00:07:36.560 nvmf threads initialize successfully 00:07:36.560 bdev subsystem init successfully 00:07:36.560 created a nvmf target service 00:07:36.560 create targets's poll groups done 00:07:36.560 all subsystems of target started 00:07:36.560 nvmf target is running 00:07:36.560 all subsystems of target stopped 00:07:36.560 destroy targets's poll groups done 00:07:36.560 destroyed the nvmf target service 00:07:36.560 bdev subsystem finish successfully 00:07:36.560 nvmf threads destroy successfully 00:07:36.560 00:33:52 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.560 00:33:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.127 00:33:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:37.127 00:33:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:37.127 00:33:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:37.127 00:33:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.127 00:07:37.127 real 0m19.917s 00:07:37.127 user 0m46.900s 00:07:37.127 sys 0m5.843s 00:07:37.127 00:33:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.127 00:33:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.127 ************************************ 00:07:37.127 END TEST nvmf_example 00:07:37.127 ************************************ 00:07:37.127 00:33:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:37.127 00:33:54 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:37.127 00:33:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:37.127 00:33:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.127 00:33:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.127 ************************************ 00:07:37.127 START TEST nvmf_filesystem 00:07:37.127 ************************************ 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:37.127 * Looking for test storage... 
00:07:37.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:37.127 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:37.128 00:33:54 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:37.128 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:37.389 00:33:54 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:37.389 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:37.389 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:37.389 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:37.389 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:37.389 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:37.389 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:37.389 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:37.389 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:37.389 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:37.389 #define SPDK_CONFIG_H 00:07:37.389 #define SPDK_CONFIG_APPS 1 00:07:37.389 #define SPDK_CONFIG_ARCH native 00:07:37.389 #undef SPDK_CONFIG_ASAN 00:07:37.389 #undef SPDK_CONFIG_AVAHI 00:07:37.389 #undef SPDK_CONFIG_CET 00:07:37.389 #define SPDK_CONFIG_COVERAGE 1 00:07:37.389 #define SPDK_CONFIG_CROSS_PREFIX 00:07:37.389 #undef SPDK_CONFIG_CRYPTO 00:07:37.389 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:37.389 #undef SPDK_CONFIG_CUSTOMOCF 00:07:37.389 #undef SPDK_CONFIG_DAOS 00:07:37.389 #define SPDK_CONFIG_DAOS_DIR 00:07:37.389 #define SPDK_CONFIG_DEBUG 1 00:07:37.389 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:37.389 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:37.389 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:37.389 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:37.389 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:37.389 #undef SPDK_CONFIG_DPDK_UADK 00:07:37.389 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:37.389 #define SPDK_CONFIG_EXAMPLES 1 00:07:37.389 #undef SPDK_CONFIG_FC 00:07:37.389 #define SPDK_CONFIG_FC_PATH 00:07:37.389 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:37.389 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:37.389 #undef SPDK_CONFIG_FUSE 00:07:37.389 #undef SPDK_CONFIG_FUZZER 00:07:37.389 #define SPDK_CONFIG_FUZZER_LIB 00:07:37.389 #undef SPDK_CONFIG_GOLANG 00:07:37.389 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:37.389 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:37.389 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:37.389 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:37.389 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:37.389 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:37.389 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:37.389 #define SPDK_CONFIG_IDXD 1 00:07:37.389 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:37.389 #undef SPDK_CONFIG_IPSEC_MB 00:07:37.389 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:37.389 #define SPDK_CONFIG_ISAL 1 00:07:37.389 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:37.389 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:37.389 #define SPDK_CONFIG_LIBDIR 00:07:37.389 #undef SPDK_CONFIG_LTO 00:07:37.389 #define SPDK_CONFIG_MAX_LCORES 128 00:07:37.389 #define SPDK_CONFIG_NVME_CUSE 1 00:07:37.389 #undef SPDK_CONFIG_OCF 00:07:37.389 #define SPDK_CONFIG_OCF_PATH 00:07:37.389 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:37.389 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:37.389 #define SPDK_CONFIG_PGO_DIR 00:07:37.389 #undef SPDK_CONFIG_PGO_USE 00:07:37.389 #define SPDK_CONFIG_PREFIX /usr/local 00:07:37.389 #undef SPDK_CONFIG_RAID5F 00:07:37.389 #undef SPDK_CONFIG_RBD 00:07:37.389 #define SPDK_CONFIG_RDMA 1 00:07:37.389 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:37.389 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:37.389 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:37.389 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:37.390 #define SPDK_CONFIG_SHARED 1 00:07:37.390 #undef SPDK_CONFIG_SMA 00:07:37.390 #define SPDK_CONFIG_TESTS 1 00:07:37.390 #undef SPDK_CONFIG_TSAN 00:07:37.390 #define SPDK_CONFIG_UBLK 1 00:07:37.390 #define SPDK_CONFIG_UBSAN 1 00:07:37.390 #undef SPDK_CONFIG_UNIT_TESTS 00:07:37.390 #undef SPDK_CONFIG_URING 00:07:37.390 #define SPDK_CONFIG_URING_PATH 00:07:37.390 #undef SPDK_CONFIG_URING_ZNS 00:07:37.390 #undef SPDK_CONFIG_USDT 00:07:37.390 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:37.390 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:37.390 #define SPDK_CONFIG_VFIO_USER 1 00:07:37.390 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:37.390 #define SPDK_CONFIG_VHOST 1 00:07:37.390 #define SPDK_CONFIG_VIRTIO 1 00:07:37.390 #undef SPDK_CONFIG_VTUNE 00:07:37.390 #define SPDK_CONFIG_VTUNE_DIR 00:07:37.390 #define SPDK_CONFIG_WERROR 1 00:07:37.390 #define SPDK_CONFIG_WPDK_DIR 00:07:37.390 #undef SPDK_CONFIG_XNVME 00:07:37.390 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:37.390 00:33:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:37.390 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:37.391 00:33:55 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:37.391 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
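The long run of '-- # : 0' / '-- # export SPDK_TEST_*' pairs in the trace above is autotest_common.sh giving every test flag a default before exporting it; under 'set -x' the ':' no-op is logged with its already-expanded argument, which is why only the bare values (': 0', ': 1', ': tcp', ': e810') appear. A minimal bash sketch of that pattern follows — flag names are taken from the trace, but the defaults shown are illustrative assumptions, not necessarily the script's real ones:

#!/usr/bin/env bash
# Sketch: assign a default to each autotest flag only if the calling
# environment (autorun-spdk.conf) has not already set it, then export it so
# child processes such as nvmf_tgt and the per-test scripts inherit it.
: "${SPDK_RUN_FUNCTIONAL_TEST:=0}";   export SPDK_RUN_FUNCTIONAL_TEST
: "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
: "${SPDK_TEST_NVME_CLI:=0}";         export SPDK_TEST_NVME_CLI
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
: "${SPDK_TEST_NVMF_NICS:=e810}";     export SPDK_TEST_NVMF_NICS
: "${SPDK_RUN_UBSAN:=0}";             export SPDK_RUN_UBSAN
# xtrace prints the ':' command with its expanded argument, producing the
# bare ': 0' / ': tcp' lines seen in the log; the export on the next script
# line then shows up as '-- # export SPDK_TEST_...'.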
00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2869495 ]] 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2869495 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.5b0QvH 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.5b0QvH/tests/target /tmp/spdk.5b0QvH 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954339328 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330090496 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=83791020032 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=94501482496 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10710462464 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47195103232 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47250739200 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=18890862592 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=18900299776 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9437184 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47249776640 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47250743296 00:07:37.392 00:33:55 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=966656 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9450143744 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450147840 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:37.392 * Looking for test storage... 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=83791020032 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12925054976 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:37.392 00:33:55 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.392 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
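The '* Looking for test storage...' and '* Found test storage at ...' lines above come from set_test_storage, which parses 'df -T', records size/avail/use per mount, and settles on the first candidate directory whose filesystem has enough free space for the test (2 GiB requested here, satisfied by the overlay root with roughly 83 GB available). Below is a simplified, hedged sketch of that selection loop; the helper name and the df --output flags are assumptions, and the real function additionally special-cases tmpfs and overlay mounts and recomputes the required size:

#!/usr/bin/env bash
# Simplified sketch of the storage-selection step traced above.
set_test_storage_sketch() {
    local requested_size=$1                       # bytes, e.g. 2147483648
    local fallback; fallback=$(mktemp -udt spdk.XXXXXX)
    local candidates=("${testdir:-$PWD}" "$fallback/tests/${testdir##*/}" "$fallback")
    local dir mount avail
    for dir in "${candidates[@]}"; do
        mkdir -p "$dir"
        # The last df line gives the mount point and available bytes for $dir.
        read -r mount avail < <(df -B1 --output=target,avail "$dir" | tail -n1)
        if (( avail >= requested_size )); then
            export SPDK_TEST_STORAGE=$dir
            printf '* Found test storage at %s (on %s)\n' "$dir" "$mount"
            return 0
        fi
    done
    printf 'no candidate with %s free bytes\n' "$requested_size" >&2
    return 1
}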
00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.393 00:33:55 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.393 00:33:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.959 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.959 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:43.960 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:43.960 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.960 00:34:00 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:43.960 Found net devices under 0000:af:00.0: cvl_0_0 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:43.960 Found net devices under 0000:af:00.1: cvl_0_1 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.960 00:34:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:43.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:07:43.960 00:07:43.960 --- 10.0.0.2 ping statistics --- 00:07:43.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.960 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:43.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:07:43.960 00:07:43.960 --- 10.0.0.1 ping statistics --- 00:07:43.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.960 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.960 ************************************ 00:07:43.960 START TEST nvmf_filesystem_no_in_capsule 00:07:43.960 ************************************ 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2872774 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2872774 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2872774 ']' 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.960 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.961 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.961 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.961 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.961 [2024-07-16 00:34:01.277497] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:43.961 [2024-07-16 00:34:01.277553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.961 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.961 [2024-07-16 00:34:01.367662] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.961 [2024-07-16 00:34:01.465379] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.961 [2024-07-16 00:34:01.465423] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.961 [2024-07-16 00:34:01.465433] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.961 [2024-07-16 00:34:01.465442] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.961 [2024-07-16 00:34:01.465450] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
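Before the reactor startup messages that follow, it helps to summarize the network setup that nvmf_tcp_init performed in the trace above. This is a condensed, hand-written sketch of those commands, not the verbatim nvmf/common.sh; the interface names (cvl_0_0, cvl_0_1), the namespace name and the 10.0.0.x addresses are taken from this log and will differ on other test nodes.

# Target-side port is moved into its own namespace; the initiator port stays in
# the root namespace, giving a two-endpoint NVMe/TCP topology on one host.
TARGET_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator IP
ip link set cvl_0_1 up

# Let NVMe/TCP traffic (port 4420) in on the initiator-facing port, then verify
# reachability in both directions before the target application is configured.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1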
00:07:43.961 [2024-07-16 00:34:01.465502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.961 [2024-07-16 00:34:01.465626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.961 [2024-07-16 00:34:01.465672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.961 [2024-07-16 00:34:01.465673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.219 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.219 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.220 [2024-07-16 00:34:01.849132] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.220 Malloc1 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.220 00:34:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.220 [2024-07-16 00:34:02.010876] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:44.220 { 00:07:44.220 "name": "Malloc1", 00:07:44.220 "aliases": [ 00:07:44.220 "b5dc9627-d047-402b-889c-f352f028db4c" 00:07:44.220 ], 00:07:44.220 "product_name": "Malloc disk", 00:07:44.220 "block_size": 512, 00:07:44.220 "num_blocks": 1048576, 00:07:44.220 "uuid": "b5dc9627-d047-402b-889c-f352f028db4c", 00:07:44.220 "assigned_rate_limits": { 00:07:44.220 "rw_ios_per_sec": 0, 00:07:44.220 "rw_mbytes_per_sec": 0, 00:07:44.220 "r_mbytes_per_sec": 0, 00:07:44.220 "w_mbytes_per_sec": 0 00:07:44.220 }, 00:07:44.220 "claimed": true, 00:07:44.220 "claim_type": "exclusive_write", 00:07:44.220 "zoned": false, 00:07:44.220 "supported_io_types": { 00:07:44.220 "read": true, 00:07:44.220 "write": true, 00:07:44.220 "unmap": true, 00:07:44.220 "flush": true, 00:07:44.220 "reset": true, 00:07:44.220 "nvme_admin": false, 00:07:44.220 "nvme_io": false, 00:07:44.220 "nvme_io_md": false, 00:07:44.220 "write_zeroes": true, 00:07:44.220 "zcopy": true, 00:07:44.220 "get_zone_info": false, 00:07:44.220 "zone_management": false, 00:07:44.220 "zone_append": false, 00:07:44.220 "compare": false, 00:07:44.220 "compare_and_write": false, 00:07:44.220 "abort": true, 00:07:44.220 "seek_hole": false, 00:07:44.220 "seek_data": false, 00:07:44.220 "copy": true, 00:07:44.220 "nvme_iov_md": false 00:07:44.220 }, 00:07:44.220 "memory_domains": [ 00:07:44.220 { 
00:07:44.220 "dma_device_id": "system", 00:07:44.220 "dma_device_type": 1 00:07:44.220 }, 00:07:44.220 { 00:07:44.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.220 "dma_device_type": 2 00:07:44.220 } 00:07:44.220 ], 00:07:44.220 "driver_specific": {} 00:07:44.220 } 00:07:44.220 ]' 00:07:44.220 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:44.479 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:44.479 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:44.479 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:44.479 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:44.479 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:44.479 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:44.479 00:34:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:45.857 00:34:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:45.857 00:34:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:45.857 00:34:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:45.857 00:34:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:45.857 00:34:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:47.761 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:48.020 00:34:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:48.957 00:34:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.894 ************************************ 00:07:49.894 START TEST filesystem_ext4 00:07:49.894 ************************************ 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:49.894 00:34:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:49.895 00:34:07 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:49.895 mke2fs 1.46.5 (30-Dec-2021) 00:07:49.895 Discarding device blocks: 0/522240 done 00:07:49.895 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:49.895 Filesystem UUID: ca95314b-3c2a-46c6-83d3-70949b0f9545 00:07:49.895 Superblock backups stored on blocks: 00:07:49.895 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:49.895 00:07:49.895 Allocating group tables: 0/64 done 00:07:49.895 Writing inode tables: 0/64 done 00:07:52.579 Creating journal (8192 blocks): done 00:07:52.579 Writing superblocks and filesystem accounting information: 0/64 done 00:07:52.579 00:07:52.579 00:34:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:52.579 00:34:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.838 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:52.838 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:52.838 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:52.838 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:52.838 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:52.838 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:52.838 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2872774 00:07:52.838 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:52.838 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:52.838 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:52.838 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:52.838 00:07:52.838 real 0m3.061s 00:07:52.838 user 0m0.041s 00:07:52.838 sys 0m0.052s 00:07:52.838 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.838 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:52.838 ************************************ 00:07:52.838 END TEST filesystem_ext4 00:07:52.838 ************************************ 00:07:53.097 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:53.097 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:53.097 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:53.097 00:34:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.097 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.097 ************************************ 00:07:53.097 START TEST filesystem_btrfs 00:07:53.097 ************************************ 00:07:53.097 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:53.097 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:53.097 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.097 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:53.097 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:53.097 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:53.098 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:53.098 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:53.098 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:53.098 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:53.098 00:34:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:53.357 btrfs-progs v6.6.2 00:07:53.357 See https://btrfs.readthedocs.io for more information. 00:07:53.357 00:07:53.357 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:53.357 NOTE: several default settings have changed in version 5.15, please make sure 00:07:53.357 this does not affect your deployments: 00:07:53.357 - DUP for metadata (-m dup) 00:07:53.357 - enabled no-holes (-O no-holes) 00:07:53.357 - enabled free-space-tree (-R free-space-tree) 00:07:53.357 00:07:53.357 Label: (null) 00:07:53.357 UUID: 4daab3ca-160f-49ad-bebc-7313d3e33da1 00:07:53.357 Node size: 16384 00:07:53.357 Sector size: 4096 00:07:53.357 Filesystem size: 510.00MiB 00:07:53.357 Block group profiles: 00:07:53.357 Data: single 8.00MiB 00:07:53.357 Metadata: DUP 32.00MiB 00:07:53.357 System: DUP 8.00MiB 00:07:53.357 SSD detected: yes 00:07:53.357 Zoned device: no 00:07:53.357 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:53.357 Runtime features: free-space-tree 00:07:53.357 Checksum: crc32c 00:07:53.357 Number of devices: 1 00:07:53.357 Devices: 00:07:53.357 ID SIZE PATH 00:07:53.357 1 510.00MiB /dev/nvme0n1p1 00:07:53.357 00:07:53.357 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:53.357 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2872774 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:53.616 00:07:53.616 real 0m0.621s 00:07:53.616 user 0m0.036s 00:07:53.616 sys 0m0.114s 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:53.616 ************************************ 00:07:53.616 END TEST filesystem_btrfs 00:07:53.616 ************************************ 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.616 ************************************ 00:07:53.616 START TEST filesystem_xfs 00:07:53.616 ************************************ 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:53.616 00:34:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:53.875 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:53.875 = sectsz=512 attr=2, projid32bit=1 00:07:53.875 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:53.875 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:53.875 data = bsize=4096 blocks=130560, imaxpct=25 00:07:53.875 = sunit=0 swidth=0 blks 00:07:53.875 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:53.875 log =internal log bsize=4096 blocks=16384, version=2 00:07:53.875 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:53.875 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:54.442 Discarding blocks...Done. 
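The xfs run above has just finished formatting the partition; what follows in the trace, and what each of the ext4/btrfs/xfs subtests in this group does after its mkfs step, is the same mount-and-verify pattern. A rough sketch, condensed from the traced commands (the device names, the /mnt/device mount point and the target PID 2872774 are taken from this log):

# One nvmf_filesystem_create iteration, approximately as executed above.
fstype=xfs                 # ext4, btrfs or xfs
dev=/dev/nvme0n1p1         # partition created earlier with parted
nvmfpid=2872774            # nvmf_tgt PID for this test group

case "$fstype" in
  ext4) mkfs.ext4 -F "$dev" ;;          # ext4 uses -F to force
  *)    "mkfs.$fstype" -f "$dev" ;;     # btrfs/xfs use -f
esac

# Exercise the filesystem over the NVMe/TCP-attached namespace.
mount "$dev" /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

# The target must still be running and both block devices still visible.
kill -0 "$nvmfpid"
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1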
00:07:54.442 00:34:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:54.442 00:34:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.973 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.973 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:56.973 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.973 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:56.973 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:56.973 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.973 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2872774 00:07:56.973 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.973 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.973 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.973 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.973 00:07:56.973 real 0m3.373s 00:07:56.973 user 0m0.028s 00:07:56.973 sys 0m0.067s 00:07:56.973 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.973 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:56.973 ************************************ 00:07:56.973 END TEST filesystem_xfs 00:07:56.973 ************************************ 00:07:57.232 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:57.232 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:57.232 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:57.232 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:57.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.232 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:57.232 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:57.232 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:57.232 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.232 00:34:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:57.232 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.232 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:57.232 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.232 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.232 00:34:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.232 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.232 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:57.232 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2872774 00:07:57.232 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2872774 ']' 00:07:57.232 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2872774 00:07:57.232 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:57.232 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:57.232 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2872774 00:07:57.232 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:57.232 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:57.232 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2872774' 00:07:57.232 killing process with pid 2872774 00:07:57.232 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2872774 00:07:57.232 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2872774 00:07:57.799 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:57.799 00:07:57.799 real 0m14.209s 00:07:57.799 user 0m55.695s 00:07:57.799 sys 0m1.378s 00:07:57.799 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.799 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.799 ************************************ 00:07:57.799 END TEST nvmf_filesystem_no_in_capsule 00:07:57.799 ************************************ 00:07:57.799 00:34:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:57.799 00:34:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:57.799 00:34:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:57.799 00:34:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.799 00:34:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:57.799 ************************************ 00:07:57.800 START TEST nvmf_filesystem_in_capsule 00:07:57.800 ************************************ 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2875516 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2875516 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2875516 ']' 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.800 00:34:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.800 [2024-07-16 00:34:15.562315] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:07:57.800 [2024-07-16 00:34:15.562370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.800 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.058 [2024-07-16 00:34:15.653104] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.058 [2024-07-16 00:34:15.746605] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.058 [2024-07-16 00:34:15.746648] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:58.058 [2024-07-16 00:34:15.746659] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.058 [2024-07-16 00:34:15.746667] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.058 [2024-07-16 00:34:15.746675] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.058 [2024-07-16 00:34:15.746724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.058 [2024-07-16 00:34:15.746848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.058 [2024-07-16 00:34:15.746960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.058 [2024-07-16 00:34:15.746960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.993 [2024-07-16 00:34:16.553304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.993 Malloc1 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.993 00:34:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.993 [2024-07-16 00:34:16.711760] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:58.993 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:58.994 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:58.994 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:58.994 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:58.994 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:58.994 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.994 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.994 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.994 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:58.994 { 00:07:58.994 "name": "Malloc1", 00:07:58.994 "aliases": [ 00:07:58.994 "cbab730f-c666-4866-b401-c07d5bf4bbc5" 00:07:58.994 ], 00:07:58.994 "product_name": "Malloc disk", 00:07:58.994 "block_size": 512, 00:07:58.994 "num_blocks": 1048576, 00:07:58.994 "uuid": "cbab730f-c666-4866-b401-c07d5bf4bbc5", 00:07:58.994 "assigned_rate_limits": { 00:07:58.994 "rw_ios_per_sec": 0, 00:07:58.994 "rw_mbytes_per_sec": 0, 00:07:58.994 "r_mbytes_per_sec": 0, 00:07:58.994 "w_mbytes_per_sec": 0 00:07:58.994 }, 00:07:58.994 "claimed": true, 00:07:58.994 "claim_type": "exclusive_write", 00:07:58.994 "zoned": false, 00:07:58.994 "supported_io_types": { 00:07:58.994 "read": true, 00:07:58.994 "write": true, 00:07:58.994 "unmap": true, 00:07:58.994 "flush": true, 00:07:58.994 "reset": true, 00:07:58.994 "nvme_admin": false, 00:07:58.994 "nvme_io": false, 00:07:58.994 "nvme_io_md": false, 00:07:58.994 "write_zeroes": true, 00:07:58.994 "zcopy": true, 00:07:58.994 "get_zone_info": false, 00:07:58.994 "zone_management": false, 00:07:58.994 
"zone_append": false, 00:07:58.994 "compare": false, 00:07:58.994 "compare_and_write": false, 00:07:58.994 "abort": true, 00:07:58.994 "seek_hole": false, 00:07:58.994 "seek_data": false, 00:07:58.994 "copy": true, 00:07:58.994 "nvme_iov_md": false 00:07:58.994 }, 00:07:58.994 "memory_domains": [ 00:07:58.994 { 00:07:58.994 "dma_device_id": "system", 00:07:58.994 "dma_device_type": 1 00:07:58.994 }, 00:07:58.994 { 00:07:58.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.994 "dma_device_type": 2 00:07:58.994 } 00:07:58.994 ], 00:07:58.994 "driver_specific": {} 00:07:58.994 } 00:07:58.994 ]' 00:07:58.994 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:58.994 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:58.994 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:59.252 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:59.252 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:59.252 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:59.252 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:59.252 00:34:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:00.627 00:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:00.627 00:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:00.627 00:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:00.627 00:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:00.627 00:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:02.530 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:03.097 00:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:04.473 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:04.473 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:04.474 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:04.474 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.474 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.474 ************************************ 00:08:04.474 START TEST filesystem_in_capsule_ext4 00:08:04.474 ************************************ 00:08:04.474 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:04.474 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:04.474 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.474 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:04.474 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:04.474 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:04.474 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:04.474 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:04.474 00:34:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:04.474 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:04.474 00:34:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:04.474 mke2fs 1.46.5 (30-Dec-2021) 00:08:04.474 Discarding device blocks: 0/522240 done 00:08:04.474 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:04.474 Filesystem UUID: 0106839e-097b-4bcf-8282-a6268b4ddd66 00:08:04.474 Superblock backups stored on blocks: 00:08:04.474 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:04.474 00:08:04.474 Allocating group tables: 0/64 done 00:08:04.474 Writing inode tables: 0/64 done 00:08:07.006 Creating journal (8192 blocks): done 00:08:08.091 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:08:08.091 00:08:08.091 00:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:08.091 00:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.350 00:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.350 00:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:08.350 00:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.350 00:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2875516 00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.350 00:08:08.350 real 0m4.096s 00:08:08.350 user 0m0.031s 00:08:08.350 sys 0m0.062s 00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:08.350 ************************************ 00:08:08.350 END TEST filesystem_in_capsule_ext4 00:08:08.350 ************************************ 
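Each filesystem_in_capsule_* subtest repeats the cycle just shown for ext4: format the partition, mount it, create and delete a file with syncs in between, unmount, and confirm that both the partition and the nvmf target process are still present. Condensed sketch, with paths and the target pid taken from the log:

    mkfs.ext4 -F /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 2875516                          # nvmf target still running?
    lsblk -l -o NAME | grep -qw nvme0n1p1    # partition still visible?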
00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.350 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.350 ************************************ 00:08:08.350 START TEST filesystem_in_capsule_btrfs 00:08:08.350 ************************************ 00:08:08.351 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:08.351 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:08.351 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.351 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:08.351 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:08.351 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:08.351 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:08.351 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:08.351 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:08.351 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:08.351 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:08.610 btrfs-progs v6.6.2 00:08:08.610 See https://btrfs.readthedocs.io for more information. 00:08:08.610 00:08:08.610 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:08.610 NOTE: several default settings have changed in version 5.15, please make sure 00:08:08.610 this does not affect your deployments: 00:08:08.610 - DUP for metadata (-m dup) 00:08:08.610 - enabled no-holes (-O no-holes) 00:08:08.610 - enabled free-space-tree (-R free-space-tree) 00:08:08.610 00:08:08.610 Label: (null) 00:08:08.610 UUID: 23fbea5b-30c8-4f5e-80e8-f01f9f304374 00:08:08.610 Node size: 16384 00:08:08.610 Sector size: 4096 00:08:08.610 Filesystem size: 510.00MiB 00:08:08.610 Block group profiles: 00:08:08.610 Data: single 8.00MiB 00:08:08.610 Metadata: DUP 32.00MiB 00:08:08.610 System: DUP 8.00MiB 00:08:08.610 SSD detected: yes 00:08:08.610 Zoned device: no 00:08:08.610 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:08.610 Runtime features: free-space-tree 00:08:08.610 Checksum: crc32c 00:08:08.610 Number of devices: 1 00:08:08.610 Devices: 00:08:08.610 ID SIZE PATH 00:08:08.610 1 510.00MiB /dev/nvme0n1p1 00:08:08.610 00:08:08.610 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:08.610 00:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2875516 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.548 00:08:09.548 real 0m1.022s 00:08:09.548 user 0m0.028s 00:08:09.548 sys 0m0.127s 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:09.548 ************************************ 00:08:09.548 END TEST filesystem_in_capsule_btrfs 00:08:09.548 ************************************ 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.548 ************************************ 00:08:09.548 START TEST filesystem_in_capsule_xfs 00:08:09.548 ************************************ 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:09.548 00:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:09.548 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:09.548 = sectsz=512 attr=2, projid32bit=1 00:08:09.548 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:09.548 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:09.548 data = bsize=4096 blocks=130560, imaxpct=25 00:08:09.548 = sunit=0 swidth=0 blks 00:08:09.548 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:09.548 log =internal log bsize=4096 blocks=16384, version=2 00:08:09.548 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:09.548 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:10.485 Discarding blocks...Done. 
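The make_filesystem helper traced for ext4, btrfs and xfs differs only in the force flag it hands to mkfs; a minimal equivalent of the helper (the real common.sh version also carries a retry counter, omitted here) looks like:

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # mkfs.ext4 forces with -F; mkfs.btrfs and mkfs.xfs use -f.
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" $force "$dev_name"
    }
    make_filesystem xfs /dev/nvme0n1p1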
00:08:10.485 00:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:10.485 00:34:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.018 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.018 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:13.018 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.018 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:13.018 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:13.018 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.018 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2875516 00:08:13.018 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.019 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.019 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.019 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.019 00:08:13.019 real 0m3.540s 00:08:13.019 user 0m0.021s 00:08:13.019 sys 0m0.074s 00:08:13.019 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.019 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:13.019 ************************************ 00:08:13.019 END TEST filesystem_in_capsule_xfs 00:08:13.019 ************************************ 00:08:13.019 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:13.019 00:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:13.277 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:13.278 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:13.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:13.537 00:34:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2875516 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2875516 ']' 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2875516 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2875516 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2875516' 00:08:13.537 killing process with pid 2875516 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2875516 00:08:13.537 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2875516 00:08:14.105 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:14.106 00:08:14.106 real 0m16.167s 00:08:14.106 user 1m3.402s 00:08:14.106 sys 0m1.417s 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.106 ************************************ 00:08:14.106 END TEST nvmf_filesystem_in_capsule 00:08:14.106 ************************************ 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.106 rmmod nvme_tcp 00:08:14.106 rmmod nvme_fabrics 00:08:14.106 rmmod nvme_keyring 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.106 00:34:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.011 00:34:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.011 00:08:16.011 real 0m38.990s 00:08:16.011 user 2m0.977s 00:08:16.011 sys 0m7.524s 00:08:16.011 00:34:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.011 00:34:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.011 ************************************ 00:08:16.011 END TEST nvmf_filesystem 00:08:16.011 ************************************ 00:08:16.270 00:34:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:16.270 00:34:33 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:16.270 00:34:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:16.270 00:34:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.270 00:34:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.270 ************************************ 00:08:16.270 START TEST nvmf_target_discovery 00:08:16.270 ************************************ 00:08:16.270 00:34:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:16.270 * Looking for test storage... 
00:08:16.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.270 00:34:33 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.270 00:34:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:16.270 00:34:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.270 00:34:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.270 00:34:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.270 00:34:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.270 00:34:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.270 00:34:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.270 00:34:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.270 00:34:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.270 00:34:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.270 00:34:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.271 00:34:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.840 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.840 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:22.840 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:22.840 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:22.840 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:22.840 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.841 00:34:39 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:22.841 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:22.841 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:22.841 Found net devices under 0000:af:00.0: cvl_0_0 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:22.841 Found net devices under 0000:af:00.1: cvl_0_1 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:22.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:08:22.841 00:08:22.841 --- 10.0.0.2 ping statistics --- 00:08:22.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.841 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:22.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:08:22.841 00:08:22.841 --- 10.0.0.1 ping statistics --- 00:08:22.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.841 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2882344 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2882344 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2882344 ']' 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:22.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.841 00:34:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.841 [2024-07-16 00:34:40.003625] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:08:22.841 [2024-07-16 00:34:40.003678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.841 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.841 [2024-07-16 00:34:40.094006] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.841 [2024-07-16 00:34:40.188132] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.841 [2024-07-16 00:34:40.188174] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.841 [2024-07-16 00:34:40.188184] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.841 [2024-07-16 00:34:40.188193] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.841 [2024-07-16 00:34:40.188201] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.841 [2024-07-16 00:34:40.188265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.841 [2024-07-16 00:34:40.188315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.841 [2024-07-16 00:34:40.188405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.841 [2024-07-16 00:34:40.188404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.841 [2024-07-16 00:34:40.537324] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
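The rpc_cmd calls in these traces go through SPDK's scripts/rpc.py against the target started above; the four-subsystem layout built over the next several entries can be reproduced roughly as follows (NQNs, serials and bdev sizes taken from the log; the $rpc variable is only illustrative):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        $rpc bdev_null_create Null$i 102400 512
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430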
00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.841 Null1 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.841 [2024-07-16 00:34:40.589695] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.841 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.842 Null2 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:22.842 00:34:40 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.842 Null3 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.842 Null4 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:22.842 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.842 00:34:40 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.100 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:08:23.100 00:08:23.100 Discovery Log Number of Records 6, Generation counter 6 00:08:23.100 =====Discovery Log Entry 0====== 00:08:23.100 trtype: tcp 00:08:23.100 adrfam: ipv4 00:08:23.100 subtype: current discovery subsystem 00:08:23.100 treq: not required 00:08:23.100 portid: 0 00:08:23.100 trsvcid: 4420 00:08:23.100 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:23.100 traddr: 10.0.0.2 00:08:23.100 eflags: explicit discovery connections, duplicate discovery information 00:08:23.100 sectype: none 00:08:23.100 =====Discovery Log Entry 1====== 00:08:23.100 trtype: tcp 00:08:23.100 adrfam: ipv4 00:08:23.100 subtype: nvme subsystem 00:08:23.100 treq: not required 00:08:23.100 portid: 0 00:08:23.100 trsvcid: 4420 00:08:23.100 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:23.100 traddr: 10.0.0.2 00:08:23.100 eflags: none 00:08:23.100 sectype: none 00:08:23.100 =====Discovery Log Entry 2====== 00:08:23.100 trtype: tcp 00:08:23.100 adrfam: ipv4 00:08:23.100 subtype: nvme subsystem 00:08:23.100 treq: not required 00:08:23.100 portid: 0 00:08:23.100 trsvcid: 4420 00:08:23.100 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:23.100 traddr: 10.0.0.2 00:08:23.100 eflags: none 00:08:23.100 sectype: none 00:08:23.100 =====Discovery Log Entry 3====== 00:08:23.100 trtype: tcp 00:08:23.100 adrfam: ipv4 00:08:23.100 subtype: nvme subsystem 00:08:23.100 treq: not required 00:08:23.100 portid: 0 00:08:23.100 trsvcid: 4420 00:08:23.100 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:23.100 traddr: 10.0.0.2 00:08:23.101 eflags: none 00:08:23.101 sectype: none 00:08:23.101 =====Discovery Log Entry 4====== 00:08:23.101 trtype: tcp 00:08:23.101 adrfam: ipv4 00:08:23.101 subtype: nvme subsystem 00:08:23.101 treq: not required 
00:08:23.101 portid: 0 00:08:23.101 trsvcid: 4420 00:08:23.101 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:23.101 traddr: 10.0.0.2 00:08:23.101 eflags: none 00:08:23.101 sectype: none 00:08:23.101 =====Discovery Log Entry 5====== 00:08:23.101 trtype: tcp 00:08:23.101 adrfam: ipv4 00:08:23.101 subtype: discovery subsystem referral 00:08:23.101 treq: not required 00:08:23.101 portid: 0 00:08:23.101 trsvcid: 4430 00:08:23.101 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:23.101 traddr: 10.0.0.2 00:08:23.101 eflags: none 00:08:23.101 sectype: none 00:08:23.101 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:23.101 Perform nvmf subsystem discovery via RPC 00:08:23.101 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:23.101 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.101 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.101 [ 00:08:23.101 { 00:08:23.101 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:23.101 "subtype": "Discovery", 00:08:23.101 "listen_addresses": [ 00:08:23.101 { 00:08:23.101 "trtype": "TCP", 00:08:23.101 "adrfam": "IPv4", 00:08:23.101 "traddr": "10.0.0.2", 00:08:23.101 "trsvcid": "4420" 00:08:23.101 } 00:08:23.101 ], 00:08:23.101 "allow_any_host": true, 00:08:23.101 "hosts": [] 00:08:23.101 }, 00:08:23.101 { 00:08:23.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:23.101 "subtype": "NVMe", 00:08:23.101 "listen_addresses": [ 00:08:23.101 { 00:08:23.101 "trtype": "TCP", 00:08:23.101 "adrfam": "IPv4", 00:08:23.101 "traddr": "10.0.0.2", 00:08:23.101 "trsvcid": "4420" 00:08:23.101 } 00:08:23.101 ], 00:08:23.101 "allow_any_host": true, 00:08:23.101 "hosts": [], 00:08:23.101 "serial_number": "SPDK00000000000001", 00:08:23.101 "model_number": "SPDK bdev Controller", 00:08:23.101 "max_namespaces": 32, 00:08:23.101 "min_cntlid": 1, 00:08:23.101 "max_cntlid": 65519, 00:08:23.101 "namespaces": [ 00:08:23.101 { 00:08:23.101 "nsid": 1, 00:08:23.101 "bdev_name": "Null1", 00:08:23.101 "name": "Null1", 00:08:23.101 "nguid": "B626B5F97E564CDCA92AE0042C940C51", 00:08:23.101 "uuid": "b626b5f9-7e56-4cdc-a92a-e0042c940c51" 00:08:23.101 } 00:08:23.101 ] 00:08:23.101 }, 00:08:23.101 { 00:08:23.101 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:23.101 "subtype": "NVMe", 00:08:23.101 "listen_addresses": [ 00:08:23.101 { 00:08:23.101 "trtype": "TCP", 00:08:23.101 "adrfam": "IPv4", 00:08:23.101 "traddr": "10.0.0.2", 00:08:23.101 "trsvcid": "4420" 00:08:23.101 } 00:08:23.101 ], 00:08:23.101 "allow_any_host": true, 00:08:23.101 "hosts": [], 00:08:23.101 "serial_number": "SPDK00000000000002", 00:08:23.101 "model_number": "SPDK bdev Controller", 00:08:23.101 "max_namespaces": 32, 00:08:23.101 "min_cntlid": 1, 00:08:23.101 "max_cntlid": 65519, 00:08:23.101 "namespaces": [ 00:08:23.101 { 00:08:23.101 "nsid": 1, 00:08:23.101 "bdev_name": "Null2", 00:08:23.101 "name": "Null2", 00:08:23.101 "nguid": "4988C3958566400F9958DC3BA7DD375B", 00:08:23.101 "uuid": "4988c395-8566-400f-9958-dc3ba7dd375b" 00:08:23.101 } 00:08:23.101 ] 00:08:23.101 }, 00:08:23.101 { 00:08:23.101 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:23.101 "subtype": "NVMe", 00:08:23.101 "listen_addresses": [ 00:08:23.101 { 00:08:23.101 "trtype": "TCP", 00:08:23.101 "adrfam": "IPv4", 00:08:23.101 "traddr": "10.0.0.2", 00:08:23.101 "trsvcid": "4420" 00:08:23.101 } 00:08:23.101 ], 00:08:23.101 "allow_any_host": true, 
00:08:23.101 "hosts": [], 00:08:23.101 "serial_number": "SPDK00000000000003", 00:08:23.101 "model_number": "SPDK bdev Controller", 00:08:23.101 "max_namespaces": 32, 00:08:23.101 "min_cntlid": 1, 00:08:23.101 "max_cntlid": 65519, 00:08:23.101 "namespaces": [ 00:08:23.101 { 00:08:23.101 "nsid": 1, 00:08:23.101 "bdev_name": "Null3", 00:08:23.101 "name": "Null3", 00:08:23.101 "nguid": "F24EB7AEC61E4293AB63522CF55D3B6E", 00:08:23.101 "uuid": "f24eb7ae-c61e-4293-ab63-522cf55d3b6e" 00:08:23.101 } 00:08:23.101 ] 00:08:23.101 }, 00:08:23.101 { 00:08:23.101 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:23.101 "subtype": "NVMe", 00:08:23.101 "listen_addresses": [ 00:08:23.101 { 00:08:23.101 "trtype": "TCP", 00:08:23.101 "adrfam": "IPv4", 00:08:23.101 "traddr": "10.0.0.2", 00:08:23.101 "trsvcid": "4420" 00:08:23.101 } 00:08:23.101 ], 00:08:23.101 "allow_any_host": true, 00:08:23.101 "hosts": [], 00:08:23.101 "serial_number": "SPDK00000000000004", 00:08:23.101 "model_number": "SPDK bdev Controller", 00:08:23.101 "max_namespaces": 32, 00:08:23.101 "min_cntlid": 1, 00:08:23.101 "max_cntlid": 65519, 00:08:23.101 "namespaces": [ 00:08:23.101 { 00:08:23.101 "nsid": 1, 00:08:23.101 "bdev_name": "Null4", 00:08:23.101 "name": "Null4", 00:08:23.101 "nguid": "8408DAC047334C51BEE2A3EBAC97B21D", 00:08:23.101 "uuid": "8408dac0-4733-4c51-bee2-a3ebac97b21d" 00:08:23.101 } 00:08:23.101 ] 00:08:23.101 } 00:08:23.101 ] 00:08:23.101 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.101 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:23.101 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.102 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.359 00:34:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.359 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:23.359 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:23.360 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:23.360 00:34:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:23.360 00:34:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:23.360 00:34:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:23.360 00:34:40 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:23.360 00:34:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:23.360 00:34:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.360 00:34:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:23.360 rmmod nvme_tcp 00:08:23.360 rmmod nvme_fabrics 00:08:23.360 rmmod nvme_keyring 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2882344 ']' 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2882344 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2882344 ']' 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2882344 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2882344 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2882344' 00:08:23.360 killing process with pid 2882344 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2882344 00:08:23.360 00:34:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2882344 00:08:23.617 00:34:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:23.617 00:34:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:23.617 00:34:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:23.617 00:34:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.617 00:34:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:23.617 00:34:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.617 00:34:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.617 00:34:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.517 00:34:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:25.517 00:08:25.517 real 0m9.444s 00:08:25.517 user 0m6.321s 00:08:25.517 sys 0m4.841s 00:08:25.517 00:34:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.517 00:34:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.517 ************************************ 00:08:25.517 END TEST nvmf_target_discovery 00:08:25.517 ************************************ 00:08:25.775 00:34:43 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:25.775 00:34:43 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:25.775 00:34:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:25.775 00:34:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.775 00:34:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:25.775 ************************************ 00:08:25.775 START TEST nvmf_referrals 00:08:25.775 ************************************ 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:25.775 * Looking for test storage... 00:08:25.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.775 00:34:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
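The referrals test drives the same RPC surface from both sides: referrals to the addresses defined here (127.0.0.2, 127.0.0.3, 127.0.0.4) are added and removed over JSON-RPC, then checked both through nvmf_discovery_get_referrals and through the host's view of the discovery log on the 8009 discovery listener. A hand-run sketch of that loop, using only the commands that appear further down in the trace, would look roughly like this (illustrative only; the exact assertions live in the xtrace below).

  # Add three referrals, confirm them from target and host side, then remove them
  # (mirrors the referrals.sh flow traced below).
  rpc() { ./scripts/rpc.py "$@"; }   # stand-in for the test's rpc_cmd helper

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done

  # Target-side view: referral addresses as reported over RPC.
  rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

  # Host-side view: every discovery-log record except the current discovery
  # subsystem itself should resolve to the same three referral addresses.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done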
00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.776 00:34:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.344 00:34:49 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:32.344 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:32.344 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.344 00:34:49 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:32.344 Found net devices under 0000:af:00.0: cvl_0_0 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:32.344 Found net devices under 0000:af:00.1: cvl_0_1 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.344 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.345 00:34:49 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:32.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:08:32.345 00:08:32.345 --- 10.0.0.2 ping statistics --- 00:08:32.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.345 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:08:32.345 00:08:32.345 --- 10.0.0.1 ping statistics --- 00:08:32.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.345 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2886110 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2886110 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2886110 ']' 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:32.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.345 00:34:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.345 [2024-07-16 00:34:49.633559] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:08:32.345 [2024-07-16 00:34:49.633616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.345 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.345 [2024-07-16 00:34:49.723961] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.345 [2024-07-16 00:34:49.813695] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.345 [2024-07-16 00:34:49.813743] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.345 [2024-07-16 00:34:49.813754] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.345 [2024-07-16 00:34:49.813762] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.345 [2024-07-16 00:34:49.813770] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.345 [2024-07-16 00:34:49.813825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.345 [2024-07-16 00:34:49.813937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.345 [2024-07-16 00:34:49.814025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.345 [2024-07-16 00:34:49.814025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.913 [2024-07-16 00:34:50.627605] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.913 [2024-07-16 00:34:50.647824] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.913 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.914 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.173 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.173 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.173 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:33.173 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.173 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.173 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.173 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.173 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:33.173 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.173 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.173 00:34:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:33.173 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.173 00:34:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.173 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.173 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:33.173 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.173 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:33.433 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:33.692 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.692 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:33.693 00:34:51 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.693 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.952 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:34.211 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:34.211 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:34.211 00:34:51 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:34.211 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:34.211 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:34.211 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.211 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:34.211 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:34.211 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:34.211 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:34.211 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:34.211 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.211 00:34:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:34.211 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:34.211 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:34.211 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.211 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:34.211 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:34.470 
00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:34.470 rmmod nvme_tcp 00:08:34.470 rmmod nvme_fabrics 00:08:34.470 rmmod nvme_keyring 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2886110 ']' 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2886110 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2886110 ']' 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2886110 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2886110 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2886110' 00:08:34.470 killing process with pid 2886110 00:08:34.470 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2886110 00:08:34.471 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2886110 00:08:34.730 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:34.730 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:34.730 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:34.730 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.730 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.730 00:34:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.730 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.730 00:34:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.266 00:34:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:37.267 00:08:37.267 real 0m11.151s 00:08:37.267 user 0m13.354s 00:08:37.267 sys 0m5.289s 00:08:37.267 00:34:54 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.267 00:34:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.267 ************************************ 00:08:37.267 END TEST nvmf_referrals 00:08:37.267 ************************************ 00:08:37.267 00:34:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:37.267 00:34:54 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:37.267 00:34:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:37.267 00:34:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.267 00:34:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:37.267 ************************************ 00:08:37.267 START TEST nvmf_connect_disconnect 00:08:37.267 ************************************ 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:37.267 * Looking for test storage... 00:08:37.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.267 00:34:54 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:37.267 00:34:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:42.540 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:42.540 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:42.540 00:35:00 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:42.540 Found net devices under 0000:af:00.0: cvl_0_0 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:42.540 Found net devices under 0000:af:00.1: cvl_0_1 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- 
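The device probing above boils down to mapping each detected E810 PCI function (device id 0x159b) to the net device the kernel created for it, via sysfs. A minimal sketch with the bus addresses from this run (the real gather_supported_nvmf_pci_devs also filters by vendor/device id and checks that the interface is up before keeping it):

    for pci in 0000:af:00.0 0000:af:00.1; do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            # common.sh additionally verifies the link is up before adding it to net_devs
            echo "Found net devices under $pci: ${netdev##*/}"
        done
    done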
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.540 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:42.541 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:42.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:08:42.799 00:08:42.799 --- 10.0.0.2 ping statistics --- 00:08:42.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.799 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:08:42.799 00:08:42.799 --- 10.0.0.1 ping statistics --- 00:08:42.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.799 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2890453 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2890453 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2890453 ']' 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:42.799 00:35:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.799 [2024-07-16 00:35:00.542292] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
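In short, nvmf_tcp_init isolates one port of the E810 pair in its own network namespace for the target and leaves the peer port on the host side for the initiator. The commands below are lifted from the trace above and reproduce that plumbing, ending with the nvmf_tgt launch inside the namespace:

    # Target side: move cvl_0_0 into a dedicated namespace as 10.0.0.2/24
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Initiator side: the peer port stays in the root namespace as 10.0.0.1/24
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions, then run the SPDK target inside the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF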
00:08:42.799 [2024-07-16 00:35:00.542347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.799 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.799 [2024-07-16 00:35:00.632369] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.058 [2024-07-16 00:35:00.724369] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.058 [2024-07-16 00:35:00.724409] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.058 [2024-07-16 00:35:00.724420] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.058 [2024-07-16 00:35:00.724429] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.058 [2024-07-16 00:35:00.724437] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.058 [2024-07-16 00:35:00.724480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.058 [2024-07-16 00:35:00.724594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.058 [2024-07-16 00:35:00.724703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.058 [2024-07-16 00:35:00.724704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.992 [2024-07-16 00:35:01.539156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:43.992 00:35:01 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:43.992 [2024-07-16 00:35:01.599191] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:43.992 00:35:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:47.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.494 rmmod nvme_tcp 00:09:01.494 rmmod nvme_fabrics 00:09:01.494 rmmod nvme_keyring 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2890453 ']' 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2890453 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- 
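The target exercised by the five connect/disconnect iterations above was assembled with the RPC calls visible in the trace. Condensed into a standalone sketch (rpc_cmd is SPDK's wrapper around scripts/rpc.py; the per-iteration nvme connect/disconnect pair is inferred from the "disconnected 1 controller(s)" lines rather than echoed verbatim):

    # TCP transport, a 64 MiB / 512 B-block malloc bdev, a subsystem exposing it, and a listener
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512     # returns the bdev name, Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Each iteration connects an initiator to the subsystem and tears it down again
    for i in $(seq 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
    done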
common/autotest_common.sh@948 -- # '[' -z 2890453 ']' 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2890453 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2890453 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2890453' 00:09:01.494 killing process with pid 2890453 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2890453 00:09:01.494 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2890453 00:09:01.752 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.752 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.752 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.752 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.752 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.752 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.752 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.752 00:35:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.283 00:35:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:04.283 00:09:04.283 real 0m26.965s 00:09:04.283 user 1m16.234s 00:09:04.283 sys 0m5.747s 00:09:04.283 00:35:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.283 00:35:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:04.283 ************************************ 00:09:04.283 END TEST nvmf_connect_disconnect 00:09:04.283 ************************************ 00:09:04.283 00:35:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:04.283 00:35:21 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:04.283 00:35:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:04.283 00:35:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.283 00:35:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.283 ************************************ 00:09:04.283 START TEST nvmf_multitarget 00:09:04.283 ************************************ 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:04.283 * Looking for test storage... 
00:09:04.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.283 00:35:21 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:04.284 00:35:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:10.847 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:10.848 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:10.848 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:10.848 Found net devices under 0000:af:00.0: cvl_0_0 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:10.848 Found net devices under 0000:af:00.1: cvl_0_1 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:10.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:10.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:09:10.848 00:09:10.848 --- 10.0.0.2 ping statistics --- 00:09:10.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.848 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:09:10.848 00:09:10.848 --- 10.0.0.1 ping statistics --- 00:09:10.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.848 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2897463 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2897463 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2897463 ']' 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.848 00:35:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:10.848 [2024-07-16 00:35:27.792765] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:09:10.848 [2024-07-16 00:35:27.792824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.848 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.848 [2024-07-16 00:35:27.882154] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.848 [2024-07-16 00:35:27.974032] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.848 [2024-07-16 00:35:27.974071] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.848 [2024-07-16 00:35:27.974081] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.848 [2024-07-16 00:35:27.974090] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.848 [2024-07-16 00:35:27.974098] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.848 [2024-07-16 00:35:27.974161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.848 [2024-07-16 00:35:27.974620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.848 [2024-07-16 00:35:27.974644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.848 [2024-07-16 00:35:27.974647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.107 00:35:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.107 00:35:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:11.107 00:35:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:11.107 00:35:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:11.107 00:35:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:11.107 00:35:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.107 00:35:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:11.107 00:35:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:11.107 00:35:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:11.107 00:35:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:11.107 00:35:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:11.366 "nvmf_tgt_1" 00:09:11.366 00:35:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:11.366 "nvmf_tgt_2" 00:09:11.366 00:35:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:11.366 00:35:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:11.624 00:35:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:09:11.624 00:35:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:11.624 true 00:09:11.624 00:35:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:11.883 true 00:09:11.883 00:35:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:11.883 00:35:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:11.883 00:35:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:11.883 00:35:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:11.883 00:35:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:11.883 00:35:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:11.883 00:35:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:11.883 00:35:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.883 00:35:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:11.883 00:35:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.883 00:35:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.883 rmmod nvme_tcp 00:09:11.883 rmmod nvme_fabrics 00:09:12.142 rmmod nvme_keyring 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2897463 ']' 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2897463 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2897463 ']' 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2897463 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2897463 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2897463' 00:09:12.142 killing process with pid 2897463 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2897463 00:09:12.142 00:35:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2897463 00:09:12.402 00:35:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:12.402 00:35:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:12.402 00:35:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:12.402 00:35:30 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.402 00:35:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:12.402 00:35:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.402 00:35:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.402 00:35:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.308 00:35:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:14.308 00:09:14.308 real 0m10.383s 00:09:14.308 user 0m10.770s 00:09:14.308 sys 0m4.955s 00:09:14.308 00:35:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.308 00:35:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:14.308 ************************************ 00:09:14.308 END TEST nvmf_multitarget 00:09:14.308 ************************************ 00:09:14.308 00:35:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:14.308 00:35:32 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:14.308 00:35:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:14.308 00:35:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.308 00:35:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:14.567 ************************************ 00:09:14.567 START TEST nvmf_rpc 00:09:14.567 ************************************ 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:14.567 * Looking for test storage... 
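The nvmf_multitarget test that just finished drives the target through test/nvmf/target/multitarget_rpc.py: it checks the default target count, adds two named targets, deletes them again, and re-checks the count with jq. A condensed sketch of that flow, assuming a running nvmf_tgt and that the in-tree scripts/rpc.py exposes the same nvmf_*_target methods as the wrapper used in the trace:

  RPC=./scripts/rpc.py                        # assumption: equivalent to multitarget_rpc.py
  $RPC nvmf_get_targets | jq length           # expect 1 (default target only)
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length           # expect 3
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length           # back to 1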
00:09:14.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.567 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:14.568 00:35:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
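Before any connections are attempted, nvmf/common.sh (sourced above) derives the host identity that every later nvme connect call reuses. A sketch of that derivation based on the trace; only nvme gen-hostnqn and the resulting variables are shown in the log, so the exact parameter expansion used for NVME_HOSTID is an assumption:

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:00abaa28-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # bare UUID, 00abaa28-3537-eb11-906e-0017a4403562 above
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # These are the flags passed by the 'nvme connect ... -t tcp -a 10.0.0.2 -s 4420' calls below.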
00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:21.138 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:21.138 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.138 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:21.139 Found net devices under 0000:af:00.0: cvl_0_0 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:21.139 Found net devices under 0000:af:00.1: cvl_0_1 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.139 00:35:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:21.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:09:21.139 00:09:21.139 --- 10.0.0.2 ping statistics --- 00:09:21.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.139 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:21.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:09:21.139 00:09:21.139 --- 10.0.0.1 ping statistics --- 00:09:21.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.139 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2901528 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2901528 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2901528 ']' 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.139 00:35:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.139 [2024-07-16 00:35:38.265382] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:09:21.139 [2024-07-16 00:35:38.265441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.139 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.139 [2024-07-16 00:35:38.354659] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.139 [2024-07-16 00:35:38.444175] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.139 [2024-07-16 00:35:38.444223] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
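nvmf_tcp_init above splits the two e810 ports between the default namespace and cvl_0_0_ns_spdk so the initiator and target can talk over TCP on one host. A condensed sketch of that wiring, using only commands that appear in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator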
00:09:21.139 [2024-07-16 00:35:38.444233] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.139 [2024-07-16 00:35:38.444242] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.139 [2024-07-16 00:35:38.444250] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.139 [2024-07-16 00:35:38.444315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.139 [2024-07-16 00:35:38.444427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.139 [2024-07-16 00:35:38.444538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.139 [2024-07-16 00:35:38.444538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.399 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.399 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:21.399 00:35:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.399 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.399 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.658 00:35:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.658 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:21.658 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.658 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.658 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.658 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:21.658 "tick_rate": 2200000000, 00:09:21.658 "poll_groups": [ 00:09:21.658 { 00:09:21.658 "name": "nvmf_tgt_poll_group_000", 00:09:21.658 "admin_qpairs": 0, 00:09:21.658 "io_qpairs": 0, 00:09:21.658 "current_admin_qpairs": 0, 00:09:21.658 "current_io_qpairs": 0, 00:09:21.658 "pending_bdev_io": 0, 00:09:21.658 "completed_nvme_io": 0, 00:09:21.658 "transports": [] 00:09:21.658 }, 00:09:21.658 { 00:09:21.658 "name": "nvmf_tgt_poll_group_001", 00:09:21.658 "admin_qpairs": 0, 00:09:21.658 "io_qpairs": 0, 00:09:21.658 "current_admin_qpairs": 0, 00:09:21.658 "current_io_qpairs": 0, 00:09:21.658 "pending_bdev_io": 0, 00:09:21.658 "completed_nvme_io": 0, 00:09:21.658 "transports": [] 00:09:21.658 }, 00:09:21.658 { 00:09:21.658 "name": "nvmf_tgt_poll_group_002", 00:09:21.658 "admin_qpairs": 0, 00:09:21.658 "io_qpairs": 0, 00:09:21.658 "current_admin_qpairs": 0, 00:09:21.658 "current_io_qpairs": 0, 00:09:21.658 "pending_bdev_io": 0, 00:09:21.658 "completed_nvme_io": 0, 00:09:21.658 "transports": [] 00:09:21.658 }, 00:09:21.658 { 00:09:21.658 "name": "nvmf_tgt_poll_group_003", 00:09:21.658 "admin_qpairs": 0, 00:09:21.658 "io_qpairs": 0, 00:09:21.658 "current_admin_qpairs": 0, 00:09:21.658 "current_io_qpairs": 0, 00:09:21.658 "pending_bdev_io": 0, 00:09:21.658 "completed_nvme_io": 0, 00:09:21.658 "transports": [] 00:09:21.659 } 00:09:21.659 ] 00:09:21.659 }' 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.659 [2024-07-16 00:35:39.375921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:21.659 "tick_rate": 2200000000, 00:09:21.659 "poll_groups": [ 00:09:21.659 { 00:09:21.659 "name": "nvmf_tgt_poll_group_000", 00:09:21.659 "admin_qpairs": 0, 00:09:21.659 "io_qpairs": 0, 00:09:21.659 "current_admin_qpairs": 0, 00:09:21.659 "current_io_qpairs": 0, 00:09:21.659 "pending_bdev_io": 0, 00:09:21.659 "completed_nvme_io": 0, 00:09:21.659 "transports": [ 00:09:21.659 { 00:09:21.659 "trtype": "TCP" 00:09:21.659 } 00:09:21.659 ] 00:09:21.659 }, 00:09:21.659 { 00:09:21.659 "name": "nvmf_tgt_poll_group_001", 00:09:21.659 "admin_qpairs": 0, 00:09:21.659 "io_qpairs": 0, 00:09:21.659 "current_admin_qpairs": 0, 00:09:21.659 "current_io_qpairs": 0, 00:09:21.659 "pending_bdev_io": 0, 00:09:21.659 "completed_nvme_io": 0, 00:09:21.659 "transports": [ 00:09:21.659 { 00:09:21.659 "trtype": "TCP" 00:09:21.659 } 00:09:21.659 ] 00:09:21.659 }, 00:09:21.659 { 00:09:21.659 "name": "nvmf_tgt_poll_group_002", 00:09:21.659 "admin_qpairs": 0, 00:09:21.659 "io_qpairs": 0, 00:09:21.659 "current_admin_qpairs": 0, 00:09:21.659 "current_io_qpairs": 0, 00:09:21.659 "pending_bdev_io": 0, 00:09:21.659 "completed_nvme_io": 0, 00:09:21.659 "transports": [ 00:09:21.659 { 00:09:21.659 "trtype": "TCP" 00:09:21.659 } 00:09:21.659 ] 00:09:21.659 }, 00:09:21.659 { 00:09:21.659 "name": "nvmf_tgt_poll_group_003", 00:09:21.659 "admin_qpairs": 0, 00:09:21.659 "io_qpairs": 0, 00:09:21.659 "current_admin_qpairs": 0, 00:09:21.659 "current_io_qpairs": 0, 00:09:21.659 "pending_bdev_io": 0, 00:09:21.659 "completed_nvme_io": 0, 00:09:21.659 "transports": [ 00:09:21.659 { 00:09:21.659 "trtype": "TCP" 00:09:21.659 } 00:09:21.659 ] 00:09:21.659 } 00:09:21.659 ] 00:09:21.659 }' 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
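The jcount/jsum helpers traced around this point simply pipe nvmf_get_stats through jq and awk to count poll groups and sum per-group counters before and after the TCP transport is created. A sketch of the same checks, assuming rpc_cmd is equivalent to calling the in-tree scripts/rpc.py:

  ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l   # jcount: 4 poll groups (one per core in -m 0xF)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192             # adds a TCP transport to every poll group
  ./scripts/rpc.py nvmf_get_stats \
    | jq '.poll_groups[].io_qpairs' \
    | awk '{s+=$1} END {print s}'                                      # jsum: 0 while no host is connected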
00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:21.659 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.919 Malloc1 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.919 [2024-07-16 00:35:39.564573] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:09:21.919 [2024-07-16 00:35:39.589091] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:09:21.919 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:21.919 could not add new controller: failed to write to nvme-fabrics device 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.919 00:35:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.298 00:35:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.298 00:35:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:23.298 00:35:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.298 00:35:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:23.298 00:35:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:25.204 00:35:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:25.204 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:25.204 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.204 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:25.204 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.204 00:35:43 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:25.204 00:35:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.462 [2024-07-16 00:35:43.197476] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562' 00:09:25.462 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:25.462 could not add new controller: failed to write to nvme-fabrics device 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.462 00:35:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:26.833 00:35:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:26.833 00:35:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:26.833 00:35:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.833 00:35:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:26.833 00:35:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:28.737 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:28.737 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:28.737 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:28.737 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:28.737 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.737 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:28.737 00:35:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:28.996 00:35:46 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.996 [2024-07-16 00:35:46.710366] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.996 00:35:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:30.373 00:35:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:30.373 00:35:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:30.373 00:35:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.373 00:35:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:30.373 00:35:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.341 [2024-07-16 00:35:50.165388] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.341 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.600 00:35:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.600 00:35:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:33.975 00:35:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:33.975 00:35:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:33.975 00:35:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.975 00:35:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:33.975 00:35:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:35.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.879 [2024-07-16 00:35:53.649977] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.879 00:35:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:37.255 00:35:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:37.255 00:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:37.255 00:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.255 00:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:37.255 00:35:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:39.155 00:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:39.155 00:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:39.155 00:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.155 00:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:39.155 00:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.155 00:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:39.155 00:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:39.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.413 [2024-07-16 00:35:57.236394] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.413 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.414 00:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:39.414 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.414 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.672 00:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.673 00:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:41.050 00:35:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:41.050 00:35:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:41.050 00:35:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.050 00:35:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:41.050 00:35:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:42.952 
00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:42.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.952 [2024-07-16 00:36:00.722567] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.952 00:36:00 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.952 00:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:44.325 00:36:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:44.325 00:36:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:44.325 00:36:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.325 00:36:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:44.325 00:36:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:46.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 [2024-07-16 00:36:04.275784] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 [2024-07-16 00:36:04.323923] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 [2024-07-16 00:36:04.376096] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
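The connect/disconnect cycles traced above all funnel through the same two polling helpers in common/autotest_common.sh (the @1198-1231 records in the trace). Reconstructed from that xtrace output, they look roughly like the sketch below; treat it as an approximation rather than the verbatim upstream source — the 15-iteration retry limit and the SPDKISFASTANDAWESOME serial are simply what the log shows.

#!/usr/bin/env bash
# Sketch of the serial-number polling seen in the trace above (approximate,
# reconstructed from xtrace; not the verbatim autotest_common.sh helpers).
waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    sleep 2
    while (( i++ <= 15 )); do
        # Count block devices whose SERIAL column matches the expected string.
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 1
    done
    return 1
}

waitforserial_disconnect() {
    local serial=$1 i=0
    # Poll until the serial no longer shows up in lsblk output.
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && return 1
        sleep 1
    done
    return 0
}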
00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 [2024-07-16 00:36:04.424279] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
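The records being traced at this point come from the second loop in target/rpc.sh (@99-107), which repeatedly builds up and tears down the same subsystem purely over RPC, with no host connect in between. Stripped of the xtrace plumbing, one iteration is roughly the following sketch; the rpc.py path is an assumed placeholder, while the RPC method names and arguments are exactly those visible in the trace.

#!/usr/bin/env bash
# One iteration of the create/teardown loop traced above (sketch).
rpc="scripts/rpc.py"                       # assumed path to the SPDK RPC client
nqn="nqn.2016-06.io.spdk:cnode1"

$rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1   # namespace is assigned ID 1
$rpc nvmf_subsystem_allow_any_host "$nqn"
$rpc nvmf_subsystem_remove_ns "$nqn" 1
$rpc nvmf_delete_subsystem "$nqn"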
00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 [2024-07-16 00:36:04.472483] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.854 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:46.854 "tick_rate": 2200000000, 00:09:46.854 "poll_groups": [ 00:09:46.854 { 00:09:46.854 "name": "nvmf_tgt_poll_group_000", 00:09:46.854 "admin_qpairs": 2, 00:09:46.854 "io_qpairs": 196, 00:09:46.854 "current_admin_qpairs": 0, 00:09:46.854 "current_io_qpairs": 0, 00:09:46.854 "pending_bdev_io": 0, 00:09:46.854 "completed_nvme_io": 295, 00:09:46.854 "transports": [ 00:09:46.854 { 00:09:46.854 "trtype": "TCP" 00:09:46.854 } 00:09:46.854 ] 00:09:46.854 }, 00:09:46.854 { 00:09:46.854 "name": "nvmf_tgt_poll_group_001", 00:09:46.854 "admin_qpairs": 2, 00:09:46.854 "io_qpairs": 196, 00:09:46.854 "current_admin_qpairs": 0, 00:09:46.854 "current_io_qpairs": 0, 00:09:46.854 "pending_bdev_io": 0, 00:09:46.854 "completed_nvme_io": 297, 00:09:46.854 "transports": [ 00:09:46.854 { 00:09:46.854 "trtype": "TCP" 00:09:46.854 } 00:09:46.854 ] 00:09:46.854 }, 00:09:46.854 { 
00:09:46.854 "name": "nvmf_tgt_poll_group_002", 00:09:46.854 "admin_qpairs": 1, 00:09:46.854 "io_qpairs": 196, 00:09:46.854 "current_admin_qpairs": 0, 00:09:46.854 "current_io_qpairs": 0, 00:09:46.854 "pending_bdev_io": 0, 00:09:46.854 "completed_nvme_io": 246, 00:09:46.854 "transports": [ 00:09:46.854 { 00:09:46.854 "trtype": "TCP" 00:09:46.854 } 00:09:46.854 ] 00:09:46.854 }, 00:09:46.854 { 00:09:46.854 "name": "nvmf_tgt_poll_group_003", 00:09:46.854 "admin_qpairs": 2, 00:09:46.854 "io_qpairs": 196, 00:09:46.854 "current_admin_qpairs": 0, 00:09:46.854 "current_io_qpairs": 0, 00:09:46.854 "pending_bdev_io": 0, 00:09:46.854 "completed_nvme_io": 296, 00:09:46.854 "transports": [ 00:09:46.854 { 00:09:46.854 "trtype": "TCP" 00:09:46.854 } 00:09:46.855 ] 00:09:46.855 } 00:09:46.855 ] 00:09:46.855 }' 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.855 rmmod nvme_tcp 00:09:46.855 rmmod nvme_fabrics 00:09:46.855 rmmod nvme_keyring 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2901528 ']' 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2901528 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2901528 ']' 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2901528 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:46.855 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:47.113 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2901528 00:09:47.113 00:36:04 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:47.113 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:47.113 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2901528' 00:09:47.113 killing process with pid 2901528 00:09:47.113 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2901528 00:09:47.113 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2901528 00:09:47.372 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:47.372 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:47.372 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:47.372 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.372 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:47.372 00:36:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.372 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.372 00:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.276 00:36:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:49.276 00:09:49.276 real 0m34.889s 00:09:49.276 user 1m47.109s 00:09:49.276 sys 0m6.634s 00:09:49.276 00:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:49.276 00:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.276 ************************************ 00:09:49.276 END TEST nvmf_rpc 00:09:49.276 ************************************ 00:09:49.276 00:36:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:49.276 00:36:07 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:49.276 00:36:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:49.276 00:36:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.276 00:36:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.276 ************************************ 00:09:49.276 START TEST nvmf_invalid 00:09:49.276 ************************************ 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:49.535 * Looking for test storage... 
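Before the log moves on to the invalid-parameter test: the (( 7 > 0 )) and (( 784 > 0 )) checks that closed out the rpc test above come from a small jq/awk aggregation (jsum in the trace) over the nvmf_get_stats JSON. A sketch of that aggregation, assuming the stats JSON is captured into $stats the way target/rpc.sh@110 does:

#!/usr/bin/env bash
# Sum one numeric field across all poll groups in the nvmf_get_stats output.
jsum() {
    local filter=$1
    # Assumes $stats holds the JSON document shown in the trace.
    jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
}

stats=$(scripts/rpc.py nvmf_get_stats)               # assumed invocation
admin_total=$(jsum '.poll_groups[].admin_qpairs')    # 2+2+1+2 = 7 in the run above
io_total=$(jsum '.poll_groups[].io_qpairs')          # 196*4  = 784 in the run above
(( admin_total > 0 && io_total > 0 ))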
00:09:49.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.535 00:36:07 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:49.536 00:36:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:56.104 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:56.104 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:56.104 Found net devices under 0000:af:00.0: cvl_0_0 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:56.104 Found net devices under 0000:af:00.1: cvl_0_1 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:56.104 00:36:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.104 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.104 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.104 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:56.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:56.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:09:56.104 00:09:56.104 --- 10.0.0.2 ping statistics --- 00:09:56.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.104 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:09:56.104 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:56.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:09:56.104 00:09:56.104 --- 10.0.0.1 ping statistics --- 00:09:56.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.104 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:09:56.104 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.104 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:56.104 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:56.104 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2910606 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2910606 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2910606 ']' 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.105 00:36:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:56.105 [2024-07-16 00:36:13.213080] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
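At this point nvmf/common.sh has finished wiring the two E810 ports (presumably cabled back-to-back) into a self-contained NVMe/TCP path: the target-side port is pushed into its own network namespace, each side gets an address on 10.0.0.0/24, and the pings above prove the path in both directions. Condensed from the @229-268 records in the trace, the setup is roughly:

#!/usr/bin/env bash
# Condensed sketch of the namespace setup traced in nvmf/common.sh above.
# cvl_0_0 / cvl_0_1 are the two ports found under 0000:af:00.0 and 0000:af:00.1.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator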
00:09:56.105 [2024-07-16 00:36:13.213137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.105 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.105 [2024-07-16 00:36:13.299437] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.105 [2024-07-16 00:36:13.391636] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.105 [2024-07-16 00:36:13.391678] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.105 [2024-07-16 00:36:13.391688] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.105 [2024-07-16 00:36:13.391697] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.105 [2024-07-16 00:36:13.391704] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.105 [2024-07-16 00:36:13.391755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.105 [2024-07-16 00:36:13.391868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.105 [2024-07-16 00:36:13.391981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.105 [2024-07-16 00:36:13.391982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.362 00:36:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:56.363 00:36:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:56.363 00:36:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:56.363 00:36:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:56.363 00:36:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:56.621 00:36:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.621 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:56.621 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10926 00:09:56.621 [2024-07-16 00:36:14.432349] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:56.880 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:56.880 { 00:09:56.880 "nqn": "nqn.2016-06.io.spdk:cnode10926", 00:09:56.880 "tgt_name": "foobar", 00:09:56.880 "method": "nvmf_create_subsystem", 00:09:56.880 "req_id": 1 00:09:56.880 } 00:09:56.880 Got JSON-RPC error response 00:09:56.880 response: 00:09:56.880 { 00:09:56.880 "code": -32603, 00:09:56.880 "message": "Unable to find target foobar" 00:09:56.880 }' 00:09:56.880 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:56.880 { 00:09:56.880 "nqn": "nqn.2016-06.io.spdk:cnode10926", 00:09:56.880 "tgt_name": "foobar", 00:09:56.880 "method": "nvmf_create_subsystem", 00:09:56.880 "req_id": 1 00:09:56.880 } 00:09:56.880 Got JSON-RPC error response 00:09:56.880 response: 00:09:56.880 { 00:09:56.880 "code": -32603, 00:09:56.880 "message": "Unable to find target foobar" 
00:09:56.880 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:56.880 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:56.880 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31637 00:09:56.880 [2024-07-16 00:36:14.697386] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31637: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:57.138 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:57.138 { 00:09:57.138 "nqn": "nqn.2016-06.io.spdk:cnode31637", 00:09:57.138 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:57.138 "method": "nvmf_create_subsystem", 00:09:57.138 "req_id": 1 00:09:57.138 } 00:09:57.138 Got JSON-RPC error response 00:09:57.138 response: 00:09:57.138 { 00:09:57.138 "code": -32602, 00:09:57.138 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:57.138 }' 00:09:57.138 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:57.138 { 00:09:57.138 "nqn": "nqn.2016-06.io.spdk:cnode31637", 00:09:57.138 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:57.138 "method": "nvmf_create_subsystem", 00:09:57.138 "req_id": 1 00:09:57.138 } 00:09:57.138 Got JSON-RPC error response 00:09:57.138 response: 00:09:57.138 { 00:09:57.138 "code": -32602, 00:09:57.138 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:57.138 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:57.138 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:57.138 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9620 00:09:57.138 [2024-07-16 00:36:14.958335] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9620: invalid model number 'SPDK_Controller' 00:09:57.398 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:57.398 { 00:09:57.398 "nqn": "nqn.2016-06.io.spdk:cnode9620", 00:09:57.398 "model_number": "SPDK_Controller\u001f", 00:09:57.398 "method": "nvmf_create_subsystem", 00:09:57.398 "req_id": 1 00:09:57.398 } 00:09:57.398 Got JSON-RPC error response 00:09:57.398 response: 00:09:57.398 { 00:09:57.398 "code": -32602, 00:09:57.398 "message": "Invalid MN SPDK_Controller\u001f" 00:09:57.398 }' 00:09:57.398 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:57.398 { 00:09:57.398 "nqn": "nqn.2016-06.io.spdk:cnode9620", 00:09:57.398 "model_number": "SPDK_Controller\u001f", 00:09:57.398 "method": "nvmf_create_subsystem", 00:09:57.398 "req_id": 1 00:09:57.398 } 00:09:57.398 Got JSON-RPC error response 00:09:57.398 response: 00:09:57.398 { 00:09:57.398 "code": -32602, 00:09:57.398 "message": "Invalid MN SPDK_Controller\u001f" 00:09:57.398 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:57.398 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:57.398 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:57.398 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:57.398 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:57.398 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:57.398 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:57.398 00:36:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.398 00:36:15 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:57.398 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '+jR0sJbx#'\''(PXj0'\''}dD$>' 00:09:57.399 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '+jR0sJbx#'\''(PXj0'\''}dD$>' nqn.2016-06.io.spdk:cnode8914 00:09:57.658 [2024-07-16 00:36:15.363895] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8914: invalid serial number '+jR0sJbx#'(PXj0'}dD$>' 00:09:57.658 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:57.658 { 00:09:57.658 "nqn": "nqn.2016-06.io.spdk:cnode8914", 00:09:57.658 "serial_number": "+jR0sJbx#'\''(PXj0'\''}dD$>", 00:09:57.658 "method": "nvmf_create_subsystem", 00:09:57.658 "req_id": 1 00:09:57.658 } 00:09:57.658 Got JSON-RPC error response 00:09:57.658 response: 
00:09:57.658 { 00:09:57.658 "code": -32602, 00:09:57.658 "message": "Invalid SN +jR0sJbx#'\''(PXj0'\''}dD$>" 00:09:57.658 }' 00:09:57.658 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:57.658 { 00:09:57.658 "nqn": "nqn.2016-06.io.spdk:cnode8914", 00:09:57.658 "serial_number": "+jR0sJbx#'(PXj0'}dD$>", 00:09:57.658 "method": "nvmf_create_subsystem", 00:09:57.658 "req_id": 1 00:09:57.658 } 00:09:57.658 Got JSON-RPC error response 00:09:57.658 response: 00:09:57.658 { 00:09:57.658 "code": -32602, 00:09:57.658 "message": "Invalid SN +jR0sJbx#'(PXj0'}dD$>" 00:09:57.658 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:57.658 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:57.658 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
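The nvmf_create_subsystem rejections traced above (the unknown target "foobar", and the serial and model numbers carrying a non-printable 0x1f byte) all follow the same capture-and-match shape: run the RPC expecting it to fail, keep its stderr, and assert on the JSON-RPC error text. A minimal sketch of that pattern, reusing the rpc.py path and the error string from the trace; this is an assumed condensation, not the verbatim invalid.sh:

    # negative-test pattern (sketch): the call must fail and the error must name the problem
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10926 2>&1) || true
    [[ $out == *"Unable to find target foobar"* ]] || exit 1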
00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:57.659 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ 
)) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.919 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
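The long run of printf %x / echo -e / string+= entries here is gen_random_s assembling a 41-character string one random ASCII code (32 through 127) at a time. A condensed, assumed reconstruction of that loop follows; printf -v replaces the traced echo -e so that space characters survive command substitution, and printf '%s\n' replaces the final echo, which the traced script instead protects with a leading '-' check:

    # sketch of gen_random_s as traced above (a reconstruction, not the script itself)
    gen_random_s() {
        local length=$1 ll ch string=
        local -a chars=($(seq 32 127))   # candidate ASCII codes, as in the chars=(...) array above
        for (( ll = 0; ll < length; ll++ )); do
            # pick a code, render it as hex, then turn "\xNN" back into the character
            printf -v ch "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")"
            string+=$ch
        done
        printf '%s\n' "$string"
    }
    # calls like "gen_random_s 21" and "gen_random_s 41" produced the strings fed to
    # nvmf_create_subsystem as invalid serial and model numbers in this test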
00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 
00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ? == \- ]] 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '?+>54:= %@Z: TA_{|ZPedbT}' 00:09:57.920 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '?+>54:= %@Z: TA_{|ZPedbT}' nqn.2016-06.io.spdk:cnode19225 00:09:58.180 [2024-07-16 00:36:15.894067] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19225: invalid model number '?+>54:= %@Z: TA_{|ZPedbT}' 00:09:58.180 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:58.180 { 00:09:58.180 "nqn": "nqn.2016-06.io.spdk:cnode19225", 00:09:58.180 "model_number": "?+>54:= %@Z: TA_{|ZPedbT}", 00:09:58.180 "method": "nvmf_create_subsystem", 00:09:58.180 "req_id": 1 00:09:58.180 } 00:09:58.180 Got JSON-RPC error response 00:09:58.180 response: 00:09:58.180 { 00:09:58.180 "code": -32602, 00:09:58.180 "message": "Invalid MN ?+>54:= %@Z: TA_{|ZPedbT}" 00:09:58.180 }' 00:09:58.180 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:58.180 { 00:09:58.180 "nqn": "nqn.2016-06.io.spdk:cnode19225", 00:09:58.180 "model_number": "?+>54:= %@Z: TA_{|ZPedbT}", 00:09:58.180 "method": "nvmf_create_subsystem", 00:09:58.180 "req_id": 1 00:09:58.180 } 00:09:58.180 Got JSON-RPC error response 00:09:58.180 response: 00:09:58.180 { 00:09:58.180 "code": -32602, 00:09:58.180 "message": "Invalid MN ?+>54:= %@Z: TA_{|ZPedbT}" 00:09:58.180 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:58.180 00:36:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:58.439 [2024-07-16 00:36:16.155173] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.439 00:36:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:58.698 00:36:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:58.698 00:36:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:58.698 00:36:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:58.698 00:36:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:58.698 00:36:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:58.958 [2024-07-16 00:36:16.693330] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:58.958 00:36:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:58.958 { 00:09:58.958 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:58.958 "listen_address": { 00:09:58.958 "trtype": "tcp", 00:09:58.958 "traddr": "", 00:09:58.958 "trsvcid": "4421" 00:09:58.958 }, 00:09:58.958 "method": "nvmf_subsystem_remove_listener", 00:09:58.958 "req_id": 1 00:09:58.958 } 00:09:58.958 Got JSON-RPC error response 00:09:58.958 response: 
00:09:58.958 { 00:09:58.958 "code": -32602, 00:09:58.958 "message": "Invalid parameters" 00:09:58.958 }' 00:09:58.958 00:36:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:58.958 { 00:09:58.958 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:58.958 "listen_address": { 00:09:58.958 "trtype": "tcp", 00:09:58.958 "traddr": "", 00:09:58.958 "trsvcid": "4421" 00:09:58.958 }, 00:09:58.958 "method": "nvmf_subsystem_remove_listener", 00:09:58.958 "req_id": 1 00:09:58.958 } 00:09:58.958 Got JSON-RPC error response 00:09:58.958 response: 00:09:58.958 { 00:09:58.958 "code": -32602, 00:09:58.958 "message": "Invalid parameters" 00:09:58.958 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:58.958 00:36:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode925 -i 0 00:09:59.217 [2024-07-16 00:36:16.954303] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode925: invalid cntlid range [0-65519] 00:09:59.217 00:36:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:59.217 { 00:09:59.217 "nqn": "nqn.2016-06.io.spdk:cnode925", 00:09:59.217 "min_cntlid": 0, 00:09:59.217 "method": "nvmf_create_subsystem", 00:09:59.217 "req_id": 1 00:09:59.217 } 00:09:59.217 Got JSON-RPC error response 00:09:59.217 response: 00:09:59.217 { 00:09:59.217 "code": -32602, 00:09:59.217 "message": "Invalid cntlid range [0-65519]" 00:09:59.217 }' 00:09:59.217 00:36:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:59.217 { 00:09:59.217 "nqn": "nqn.2016-06.io.spdk:cnode925", 00:09:59.217 "min_cntlid": 0, 00:09:59.217 "method": "nvmf_create_subsystem", 00:09:59.217 "req_id": 1 00:09:59.217 } 00:09:59.217 Got JSON-RPC error response 00:09:59.217 response: 00:09:59.217 { 00:09:59.217 "code": -32602, 00:09:59.218 "message": "Invalid cntlid range [0-65519]" 00:09:59.218 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:59.218 00:36:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11011 -i 65520 00:09:59.477 [2024-07-16 00:36:17.215330] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11011: invalid cntlid range [65520-65519] 00:09:59.477 00:36:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:59.477 { 00:09:59.477 "nqn": "nqn.2016-06.io.spdk:cnode11011", 00:09:59.477 "min_cntlid": 65520, 00:09:59.477 "method": "nvmf_create_subsystem", 00:09:59.477 "req_id": 1 00:09:59.477 } 00:09:59.477 Got JSON-RPC error response 00:09:59.477 response: 00:09:59.477 { 00:09:59.477 "code": -32602, 00:09:59.477 "message": "Invalid cntlid range [65520-65519]" 00:09:59.477 }' 00:09:59.477 00:36:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:59.477 { 00:09:59.477 "nqn": "nqn.2016-06.io.spdk:cnode11011", 00:09:59.477 "min_cntlid": 65520, 00:09:59.477 "method": "nvmf_create_subsystem", 00:09:59.477 "req_id": 1 00:09:59.477 } 00:09:59.477 Got JSON-RPC error response 00:09:59.477 response: 00:09:59.477 { 00:09:59.477 "code": -32602, 00:09:59.477 "message": "Invalid cntlid range [65520-65519]" 00:09:59.477 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:59.477 00:36:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode650 -I 0 00:09:59.736 [2024-07-16 00:36:17.472334] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode650: invalid cntlid range [1-0] 00:09:59.736 00:36:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:59.736 { 00:09:59.736 "nqn": "nqn.2016-06.io.spdk:cnode650", 00:09:59.736 "max_cntlid": 0, 00:09:59.736 "method": "nvmf_create_subsystem", 00:09:59.736 "req_id": 1 00:09:59.736 } 00:09:59.736 Got JSON-RPC error response 00:09:59.736 response: 00:09:59.736 { 00:09:59.736 "code": -32602, 00:09:59.736 "message": "Invalid cntlid range [1-0]" 00:09:59.736 }' 00:09:59.736 00:36:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:59.736 { 00:09:59.736 "nqn": "nqn.2016-06.io.spdk:cnode650", 00:09:59.736 "max_cntlid": 0, 00:09:59.736 "method": "nvmf_create_subsystem", 00:09:59.736 "req_id": 1 00:09:59.736 } 00:09:59.736 Got JSON-RPC error response 00:09:59.736 response: 00:09:59.736 { 00:09:59.736 "code": -32602, 00:09:59.736 "message": "Invalid cntlid range [1-0]" 00:09:59.736 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:59.736 00:36:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24569 -I 65520 00:09:59.995 [2024-07-16 00:36:17.725317] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24569: invalid cntlid range [1-65520] 00:09:59.995 00:36:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:59.995 { 00:09:59.995 "nqn": "nqn.2016-06.io.spdk:cnode24569", 00:09:59.995 "max_cntlid": 65520, 00:09:59.995 "method": "nvmf_create_subsystem", 00:09:59.995 "req_id": 1 00:09:59.995 } 00:09:59.995 Got JSON-RPC error response 00:09:59.995 response: 00:09:59.995 { 00:09:59.995 "code": -32602, 00:09:59.995 "message": "Invalid cntlid range [1-65520]" 00:09:59.995 }' 00:09:59.995 00:36:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:59.995 { 00:09:59.995 "nqn": "nqn.2016-06.io.spdk:cnode24569", 00:09:59.995 "max_cntlid": 65520, 00:09:59.995 "method": "nvmf_create_subsystem", 00:09:59.995 "req_id": 1 00:09:59.995 } 00:09:59.995 Got JSON-RPC error response 00:09:59.995 response: 00:09:59.995 { 00:09:59.995 "code": -32602, 00:09:59.995 "message": "Invalid cntlid range [1-65520]" 00:09:59.995 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:59.995 00:36:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10732 -i 6 -I 5 00:10:00.252 [2024-07-16 00:36:17.990344] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10732: invalid cntlid range [6-5] 00:10:00.252 00:36:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:00.252 { 00:10:00.252 "nqn": "nqn.2016-06.io.spdk:cnode10732", 00:10:00.252 "min_cntlid": 6, 00:10:00.252 "max_cntlid": 5, 00:10:00.252 "method": "nvmf_create_subsystem", 00:10:00.252 "req_id": 1 00:10:00.252 } 00:10:00.252 Got JSON-RPC error response 00:10:00.252 response: 00:10:00.252 { 00:10:00.253 "code": -32602, 00:10:00.253 "message": "Invalid cntlid range [6-5]" 00:10:00.253 }' 00:10:00.253 00:36:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:00.253 { 00:10:00.253 "nqn": "nqn.2016-06.io.spdk:cnode10732", 00:10:00.253 "min_cntlid": 6, 00:10:00.253 "max_cntlid": 5, 00:10:00.253 "method": 
"nvmf_create_subsystem", 00:10:00.253 "req_id": 1 00:10:00.253 } 00:10:00.253 Got JSON-RPC error response 00:10:00.253 response: 00:10:00.253 { 00:10:00.253 "code": -32602, 00:10:00.253 "message": "Invalid cntlid range [6-5]" 00:10:00.253 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:00.253 00:36:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:00.511 { 00:10:00.511 "name": "foobar", 00:10:00.511 "method": "nvmf_delete_target", 00:10:00.511 "req_id": 1 00:10:00.511 } 00:10:00.511 Got JSON-RPC error response 00:10:00.511 response: 00:10:00.511 { 00:10:00.511 "code": -32602, 00:10:00.511 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:00.511 }' 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:00.511 { 00:10:00.511 "name": "foobar", 00:10:00.511 "method": "nvmf_delete_target", 00:10:00.511 "req_id": 1 00:10:00.511 } 00:10:00.511 Got JSON-RPC error response 00:10:00.511 response: 00:10:00.511 { 00:10:00.511 "code": -32602, 00:10:00.511 "message": "The specified target doesn't exist, cannot delete it." 00:10:00.511 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:00.511 rmmod nvme_tcp 00:10:00.511 rmmod nvme_fabrics 00:10:00.511 rmmod nvme_keyring 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2910606 ']' 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2910606 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2910606 ']' 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2910606 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2910606 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2910606' 00:10:00.511 killing process with pid 2910606 
00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2910606 00:10:00.511 00:36:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2910606 00:10:00.769 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:00.769 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:00.769 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:00.769 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:00.769 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:00.769 00:36:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.769 00:36:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.769 00:36:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.674 00:36:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:02.674 00:10:02.674 real 0m13.391s 00:10:02.674 user 0m24.984s 00:10:02.674 sys 0m5.560s 00:10:02.674 00:36:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.674 00:36:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:02.674 ************************************ 00:10:02.674 END TEST nvmf_invalid 00:10:02.674 ************************************ 00:10:02.933 00:36:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:02.933 00:36:20 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:02.933 00:36:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:02.933 00:36:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.933 00:36:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:02.933 ************************************ 00:10:02.933 START TEST nvmf_abort 00:10:02.933 ************************************ 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:02.933 * Looking for test storage... 
00:10:02.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
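As the trace above shows, abort.sh sources nvmf/common.sh, which derives the host identity used by later connect calls from nvme gen-hostnqn. Roughly, with the parameter expansion here being an assumed equivalent of what common.sh does:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00abaa28-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # the UUID portion, matching the NVME_HOSTID seen above
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")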
00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:02.933 00:36:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.500 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.500 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:09.500 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:09.501 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.501 00:36:26 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:09.501 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:09.501 Found net devices under 0000:af:00.0: cvl_0_0 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:09.501 Found net devices under 0000:af:00.1: cvl_0_1 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:09.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:09.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:10:09.501 00:10:09.501 --- 10.0.0.2 ping statistics --- 00:10:09.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.501 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:09.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
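The loop traced above matches each PCI function against the supported device-ID tables (E810 is 0x8086:0x1592 / 0x8086:0x159b) and then resolves every matching port to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/, which is how cvl_0_0 and cvl_0_1 are found. A minimal standalone sketch of that lookup follows, assuming the same E810 device ID; it is illustrative and not part of nvmf/common.sh:

  #!/usr/bin/env bash
  # Map Intel E810 ports (vendor 0x8086, device 0x159b) to their net devices via sysfs.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && printf 'Found net device under %s: %s\n' "${pci##*/}" "${net##*/}"
      done
  done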
00:10:09.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:10:09.501 00:10:09.501 --- 10.0.0.1 ping statistics --- 00:10:09.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.501 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2915293 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2915293 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2915293 ']' 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.501 00:36:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:09.501 [2024-07-16 00:36:26.625716] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:10:09.501 [2024-07-16 00:36:26.625772] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.501 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.501 [2024-07-16 00:36:26.714488] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:09.501 [2024-07-16 00:36:26.820429] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.501 [2024-07-16 00:36:26.820480] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
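nvmf_tcp_init, traced a few lines up, turns the two E810 ports into a self-contained NVMe/TCP loopback: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the target binary is later launched under ip netns exec so it only sees the namespaced port. A condensed sketch of that setup, using the interface and namespace names from the trace (everything else is illustrative):

  #!/usr/bin/env bash
  set -e
  NS=cvl_0_0_ns_spdk; TGT_IF=cvl_0_0; INI_IF=cvl_0_1
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                 # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                # reach the target side ...
  ip netns exec "$NS" ping -c 1 10.0.0.1            # ... and back to the initiator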
00:10:09.501 [2024-07-16 00:36:26.820493] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.502 [2024-07-16 00:36:26.820504] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.502 [2024-07-16 00:36:26.820514] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:09.502 [2024-07-16 00:36:26.820634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.502 [2024-07-16 00:36:26.820747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.502 [2024-07-16 00:36:26.820745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.764 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.765 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:09.765 00:36:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:09.765 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:09.765 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.024 [2024-07-16 00:36:27.626830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.024 Malloc0 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.024 Delay0 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.024 00:36:27 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.024 [2024-07-16 00:36:27.707949] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.024 00:36:27 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:10.024 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.024 [2024-07-16 00:36:27.827704] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:12.560 Initializing NVMe Controllers 00:10:12.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:12.560 controller IO queue size 128 less than required 00:10:12.560 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:12.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:12.560 Initialization complete. Launching workers. 
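abort.sh drives the scenario entirely over JSON-RPC: a TCP transport, a 64 MB Malloc bdev with 4096-byte blocks wrapped by the Delay0 delay bdev, a subsystem nqn.2016-06.io.spdk:cnode0 exposing Delay0 as a namespace on 10.0.0.2:4420, and then the abort example at queue depth 128 so aborts are issued against I/O still held up in the delay layer. The same sequence reduced to plain commands (taken from the trace, shown here with scripts/rpc.py instead of the rpc_cmd helper and with paths shortened):

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128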
00:10:12.560 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29674 00:10:12.560 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29735, failed to submit 62 00:10:12.560 success 29678, unsuccess 57, failed 0 00:10:12.560 00:36:29 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:12.560 00:36:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.560 00:36:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:12.560 00:36:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.560 00:36:29 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:12.560 00:36:29 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:12.560 00:36:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:12.560 00:36:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:12.560 00:36:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:12.560 00:36:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:12.560 00:36:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:12.560 00:36:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:12.560 rmmod nvme_tcp 00:10:12.560 rmmod nvme_fabrics 00:10:12.560 rmmod nvme_keyring 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2915293 ']' 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2915293 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2915293 ']' 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2915293 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2915293 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2915293' 00:10:12.560 killing process with pid 2915293 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2915293 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2915293 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.560 00:36:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.101 00:36:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:15.101 00:10:15.101 real 0m11.878s 00:10:15.101 user 0m14.036s 00:10:15.101 sys 0m5.487s 00:10:15.101 00:36:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:15.101 00:36:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:15.101 ************************************ 00:10:15.101 END TEST nvmf_abort 00:10:15.101 ************************************ 00:10:15.101 00:36:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:15.101 00:36:32 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:15.101 00:36:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:15.101 00:36:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.101 00:36:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:15.101 ************************************ 00:10:15.101 START TEST nvmf_ns_hotplug_stress 00:10:15.101 ************************************ 00:10:15.101 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:15.101 * Looking for test storage... 00:10:15.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.102 00:36:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:15.102 00:36:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:15.102 00:36:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:20.419 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:20.419 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.419 00:36:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:20.419 Found net devices under 0000:af:00.0: cvl_0_0 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:20.419 Found net devices under 0000:af:00.1: cvl_0_1 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.419 00:36:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:20.419 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:20.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:10:20.679 00:10:20.679 --- 10.0.0.2 ping statistics --- 00:10:20.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.679 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:10:20.679 00:10:20.679 --- 10.0.0.1 ping statistics --- 00:10:20.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.679 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:20.679 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:20.938 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:20.938 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.938 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:20.938 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.938 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2919596 00:10:20.938 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2919596 00:10:20.938 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:20.938 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2919596 ']' 00:10:20.938 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.938 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:20.938 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.938 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:20.938 00:36:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.938 [2024-07-16 00:36:38.601459] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
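nvmfappstart, traced above, launches nvmf_tgt in the background (wrapped in ip netns exec for the target namespace and pinned to cores 1-3 via -m 0xE, which is why three reactors come up) and then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers; only then do the rpc_cmd calls that follow run. A simplified stand-in for that start-and-wait step, assuming default paths; this is not the actual autotest_common.sh implementation:

  # Start the target inside the namespace and wait until its RPC socket responds.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 2 spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"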
00:10:20.938 [2024-07-16 00:36:38.601501] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.938 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.938 [2024-07-16 00:36:38.676710] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:21.196 [2024-07-16 00:36:38.779989] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.196 [2024-07-16 00:36:38.780042] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.196 [2024-07-16 00:36:38.780055] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.196 [2024-07-16 00:36:38.780066] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.196 [2024-07-16 00:36:38.780076] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.196 [2024-07-16 00:36:38.780204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.196 [2024-07-16 00:36:38.780316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.196 [2024-07-16 00:36:38.780319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.763 00:36:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.763 00:36:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:21.763 00:36:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.763 00:36:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:21.763 00:36:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.763 00:36:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.763 00:36:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:21.763 00:36:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:22.021 [2024-07-16 00:36:39.740909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.021 00:36:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:22.278 00:36:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.536 [2024-07-16 00:36:40.268744] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.536 00:36:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:22.793 00:36:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:23.050 Malloc0 00:10:23.050 00:36:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:23.307 Delay0 00:10:23.307 00:36:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.564 00:36:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:23.821 NULL1 00:10:23.821 00:36:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:24.079 00:36:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2920369 00:10:24.079 00:36:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:24.079 00:36:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:24.079 00:36:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.079 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.451 Read completed with error (sct=0, sc=11) 00:10:25.451 00:36:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.451 00:36:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:25.451 00:36:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:25.709 true 00:10:25.709 00:36:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:25.709 00:36:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.646 00:36:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.904 00:36:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:26.904 00:36:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:26.904 true 00:10:26.904 00:36:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 2920369 00:10:26.904 00:36:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.163 00:36:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.422 00:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:27.422 00:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:27.681 true 00:10:27.681 00:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:27.681 00:36:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.618 00:36:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.877 00:36:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:28.877 00:36:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:29.137 true 00:10:29.137 00:36:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:29.137 00:36:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.396 00:36:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.655 00:36:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:29.655 00:36:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:29.913 true 00:10:29.913 00:36:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:29.913 00:36:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.850 00:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.108 00:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:31.108 00:36:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:31.366 true 00:10:31.366 
00:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:31.366 00:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.625 00:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.884 00:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:31.884 00:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:32.142 true 00:10:32.142 00:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:32.142 00:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.078 00:36:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.336 00:36:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:33.337 00:36:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:33.595 true 00:10:33.595 00:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:33.595 00:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.853 00:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.112 00:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:34.112 00:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:34.370 true 00:10:34.370 00:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:34.370 00:36:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.306 00:36:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.306 00:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:35.306 00:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:35.565 true 
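The cycle repeating above is the hot-plug stress itself: spdk_nvme_perf (PERF_PID 2920369) runs 30 seconds of queue-depth-128 random 512-byte reads against nqn.2016-06.io.spdk:cnode1, and for as long as that process stays alive (the kill -0 check) the script removes namespace 1, re-adds Delay0, and resizes the NULL1 null bdev one step larger (null_size 1001, 1002, ...), so the initiator keeps seeing namespaces detach, attach, and change size under active I/O. Condensed to its essentials (rpc.py path shortened, otherwise the flags are the ones in the trace):

  SUBSYS=nqn.2016-06.io.spdk:cnode1
  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do      # loop until the 30 s perf run exits
      ./scripts/rpc.py nvmf_subsystem_remove_ns "$SUBSYS" 1
      ./scripts/rpc.py nvmf_subsystem_add_ns "$SUBSYS" Delay0
      null_size=$((null_size + 1))
      ./scripts/rpc.py bdev_null_resize NULL1 "$null_size"
  done
  wait "$PERF_PID"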
00:10:35.565 00:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:35.565 00:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.823 00:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.081 00:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:36.081 00:36:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:36.339 true 00:10:36.340 00:36:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:36.340 00:36:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.272 00:36:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.530 00:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:37.530 00:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:37.788 true 00:10:37.788 00:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:37.788 00:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.046 00:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.328 00:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:38.328 00:36:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:38.586 true 00:10:38.586 00:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:38.586 00:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.843 00:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.100 00:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:39.100 00:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:39.358 true 00:10:39.358 00:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:39.358 00:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.294 00:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.553 00:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:40.553 00:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:40.812 true 00:10:40.812 00:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:40.812 00:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.071 00:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.330 00:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:41.330 00:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:41.588 true 00:10:41.588 00:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:41.588 00:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.525 00:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.525 00:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:42.525 00:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:42.785 true 00:10:42.785 00:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:42.785 00:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.043 00:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.302 00:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:43.302 00:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:43.561 true 00:10:43.561 00:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:43.561 00:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.497 00:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.757 00:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:44.757 00:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:45.015 true 00:10:45.015 00:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:45.015 00:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.273 00:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.531 00:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:45.531 00:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:45.790 true 00:10:45.790 00:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:45.790 00:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.048 00:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.305 00:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:46.305 00:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:46.564 true 00:10:46.564 00:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:46.564 00:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.759 00:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.086 00:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:48.086 00:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:48.086 true 00:10:48.086 00:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:48.086 00:37:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.387 00:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.645 00:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:48.645 00:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:48.903 true 00:10:48.903 00:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:48.903 00:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.838 00:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.096 00:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:50.096 00:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:50.354 true 00:10:50.354 00:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:50.354 00:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.613 00:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.871 00:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:50.871 00:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:51.129 true 00:10:51.129 00:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369 00:10:51.129 00:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.067 00:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.326 00:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:52.326 00:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:52.584 true 00:10:52.584 00:37:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369
00:10:52.584 00:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:52.843 00:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:53.102 00:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:10:53.102 00:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:10:53.360 true
00:10:53.360 00:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369
00:10:53.360 00:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:54.298 00:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:54.298 00:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:10:54.298 00:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:10:54.298 Initializing NVMe Controllers
00:10:54.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:54.298 Controller IO queue size 128, less than required.
00:10:54.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:54.298 Controller IO queue size 128, less than required.
00:10:54.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:54.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:54.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:54.298 Initialization complete. Launching workers.
00:10:54.298 ========================================================
00:10:54.298 Latency(us)
00:10:54.298 Device Information : IOPS MiB/s Average min max
00:10:54.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 612.38 0.30 111794.12 3833.05 1020910.96
00:10:54.298 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4458.78 2.18 28716.19 10353.25 582237.30
00:10:54.298 ========================================================
00:10:54.298 Total : 5071.17 2.48 38748.52 3833.05 1020910.96
00:10:54.298
00:10:54.557 true
00:10:54.557 00:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2920369
00:10:54.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2920369) - No such process
00:10:54.557 00:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2920369
00:10:54.557 00:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:54.816 00:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:55.074 00:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:55.074 00:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:55.074 00:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:55.074 00:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:55.074 00:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:55.333 null0
00:10:55.333 00:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:55.333 00:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:55.333 00:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:55.593 null1
00:10:55.593 00:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:55.593 00:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:55.593 00:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:55.852 null2
00:10:55.852 00:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:55.852 00:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:55.852 00:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:10:56.110 null3
00:10:56.110 00:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:56.111 00:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:56.111 00:37:13 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:56.369 null4 00:10:56.369 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:56.369 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:56.369 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:56.627 null5 00:10:56.627 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:56.627 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:56.627 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:56.887 null6 00:10:56.887 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:56.887 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:56.887 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:57.147 null7 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:57.147 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
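The trace has now switched to the eight-way phase: the add_remove helper at ns_hotplug_stress.sh lines 14-18 and its launcher at lines 58-66. A sketch of that structure, reconstructed from the xtrace in this log; function and variable names not shown by the trace are illustrative assumptions:

#!/usr/bin/env bash
# Sketch of ns_hotplug_stress.sh lines 14-18 and 58-66, inferred from the xtrace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                        # lines 14-18: churn one namespace ID ten times
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; ++i)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"    # line 17
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"            # line 18
    done
}

nthreads=8                            # line 58
pids=()
for ((i = 0; i < nthreads; ++i)); do  # lines 59-60: one 100-block, 4096-byte null bdev per worker
    "$rpc" bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; ++i)); do  # lines 62-64: launch the workers and record their PIDs
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"                     # line 66: the eight PIDs listed in the wait entry below

Eight background workers each add and remove their own namespace ID (1 through 8, backed by null0 through null7) against nqn.2016-06.io.spdk:cnode1 while the parent waits on the PIDs shown in the wait entry that follows, which is why add_ns and remove_ns calls for all eight namespaces interleave from here on.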
00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2926272 2926274 2926277 2926280 2926283 2926286 2926289 2926292 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.148 00:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.407 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.407 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.407 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.407 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.407 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.407 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.407 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.407 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.666 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.924 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.924 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.924 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.924 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.924 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.924 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.924 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.182 
00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.182 00:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.441 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.700 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.958 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.958 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.958 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.958 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.959 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.959 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.959 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.959 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.959 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.959 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.959 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.959 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.959 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.959 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.959 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.959 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.216 00:37:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:59.216 00:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:59.216 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:59.517 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:59.776 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.034 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:00.292 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.292 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.292 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:00.292 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.292 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.292 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:00.293 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.293 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.293 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:00.293 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.293 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.293 00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:00.293 
00:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:00.293 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:00.293 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:00.293 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:00.293 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.552 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:00.811 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.070 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:01.329 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.329 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.329 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:01.329 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:01.329 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.329 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.329 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:01.329 00:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:01.329 
00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:01.329 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:01.329 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:01.329 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:01.329 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.329 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.329 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:01.588 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:01.847 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:01.847 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:01.847 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:01.847 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:01.847 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:01.847 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.847 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:02.106 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.106 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.106 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.106 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.106 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.106 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.106 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.106 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.106 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.106 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.106 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
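The trace above is the heart of the hotplug stress test: ns_hotplug_stress.sh loops up to ten times, and on each pass fires a burst of nvmf_subsystem_add_ns calls (attaching the null0..null7 bdevs as namespaces 1..8 of nqn.2016-06.io.spdk:cnode1) followed by a burst of nvmf_subsystem_remove_ns calls, so namespaces appear and disappear underneath the connected host. A minimal bash sketch of that pattern follows; it is an illustration of what the log shows, not the script's actual source, and the random subset selection via shuf is an assumption.

#!/usr/bin/env bash
# Illustrative reconstruction of the add/remove churn traced above.
# Assumptions: bdevs null0..null7 and subsystem cnode1 already exist, and the
# per-iteration subset selection (shuf) stands in for whatever ordering the
# real ns_hotplug_stress.sh uses.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do
    (( ++i ))
    # Attach a random subset of the null bdevs as namespaces (nsid n -> bdev null$((n-1))).
    for n in $(shuf -e {1..8} -n $(( RANDOM % 8 + 1 ))); do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))" || true
    done
    # Detach a random subset again while the host keeps I/O in flight.
    for n in $(shuf -e {1..8} -n $(( RANDOM % 8 + 1 ))); do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" || true
    done
done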
00:11:02.106 00:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.364 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.364 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.364 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:02.622 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:02.880 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:02.880 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:02.880 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:02.880 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:02.880 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:02.880 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:03.141 rmmod nvme_tcp 00:11:03.141 rmmod nvme_fabrics 00:11:03.141 rmmod nvme_keyring 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2919596 ']' 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2919596 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2919596 ']' 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2919596 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2919596 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2919596' 00:11:03.141 killing process with pid 2919596 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2919596 00:11:03.141 00:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2919596 00:11:03.400 00:37:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:03.400 00:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:03.400 00:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:03.400 00:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.400 00:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:03.400 00:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.400 00:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.400 00:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.330 00:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:05.330 00:11:05.330 real 0m50.628s 00:11:05.330 user 3m32.442s 00:11:05.330 sys 0m16.038s 00:11:05.330 00:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:05.330 00:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.330 ************************************ 00:11:05.330 END TEST nvmf_ns_hotplug_stress 00:11:05.330 ************************************ 00:11:05.592 00:37:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:05.592 00:37:23 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:05.592 00:37:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:05.592 00:37:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.592 00:37:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:05.592 ************************************ 00:11:05.592 START TEST nvmf_connect_stress 00:11:05.592 ************************************ 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:05.592 * Looking for test storage... 
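Before the connect_stress run gets going, it is worth condensing the teardown that nvmftestfini just performed for the hotplug test: the EXIT trap is cleared, nvmfcleanup unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules, killprocess stops the nvmf_tgt process (PID 2919596 in this run), and the test network namespace and initiator-side address are removed. A rough plain-shell equivalent follows, using the names from this log; the real nvmf/common.sh helpers do more bookkeeping, and the netns deletion is inferred since _remove_spdk_ns runs with tracing suppressed.

# Rough, hedged equivalent of the nvmftestfini teardown traced above.
nvmfpid=2919596                                  # nvmf_tgt started for the hotplug test (from the log)

sync
for mod in nvme-tcp nvme-fabrics nvme-keyring; do
    modprobe -v -r "$mod" || true                # best effort, mirroring the set +e retry loop
done
kill "$nvmfpid" 2>/dev/null || true              # killprocess also waits for the PID to exit
ip netns delete cvl_0_0_ns_spdk 2>/dev/null      # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                         # drop the initiator-side test address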
00:11:05.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:05.592 00:37:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.165 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.165 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:12.166 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:12.166 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:12.166 Found net devices under 0000:af:00.0: cvl_0_0 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.166 00:37:28 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:12.166 Found net devices under 0000:af:00.1: cvl_0_1 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.166 00:37:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:12.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:12.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:11:12.166 00:11:12.166 --- 10.0.0.2 ping statistics --- 00:11:12.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.166 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:12.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:11:12.166 00:11:12.166 --- 10.0.0.1 ping statistics --- 00:11:12.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.166 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2931151 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2931151 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2931151 ']' 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.166 00:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.166 [2024-07-16 00:37:29.116333] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
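The block above is nvmftestinit building the single-host TCP topology that connect_stress (like the hotplug test before it) runs on: one port of the e810 NIC (cvl_0_0) is moved into a dedicated network namespace and becomes the target side at 10.0.0.2, while the sibling port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; an iptables rule opens TCP port 4420 and a ping in each direction verifies the link before nvmf_tgt is launched inside the namespace. Condensed from the commands in the trace (interface and namespace names as in this run):

# Condensed from the nvmf_tcp_init trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

The target application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is why the connect_stress listener created a few lines below binds to 10.0.0.2:4420.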
00:11:12.167 [2024-07-16 00:37:29.116387] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.167 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.167 [2024-07-16 00:37:29.204443] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:12.167 [2024-07-16 00:37:29.311593] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.167 [2024-07-16 00:37:29.311638] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.167 [2024-07-16 00:37:29.311651] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.167 [2024-07-16 00:37:29.311662] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.167 [2024-07-16 00:37:29.311671] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.167 [2024-07-16 00:37:29.311801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.167 [2024-07-16 00:37:29.311912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.167 [2024-07-16 00:37:29.311914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.167 00:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.167 00:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:12.167 00:37:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:12.167 00:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:12.167 00:37:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.426 [2024-07-16 00:37:30.037892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.426 [2024-07-16 00:37:30.070406] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.426 NULL1 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2931427 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.426 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.685 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.685 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:12.685 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.685 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.685 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.253 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.253 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:13.253 00:37:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.253 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.253 00:37:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.512 00:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.512 00:37:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 2931427 00:11:13.512 00:37:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.512 00:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.512 00:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.771 00:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.771 00:37:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:13.771 00:37:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.771 00:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.771 00:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.030 00:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.030 00:37:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:14.030 00:37:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.030 00:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.030 00:37:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.290 00:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.290 00:37:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:14.290 00:37:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.290 00:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.290 00:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.856 00:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.856 00:37:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:14.856 00:37:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.856 00:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.856 00:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.114 00:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.114 00:37:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:15.114 00:37:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.114 00:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.114 00:37:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.372 00:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.372 00:37:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:15.372 00:37:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.372 00:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.372 00:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.632 00:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.632 00:37:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:15.632 00:37:33 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.632 00:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.632 00:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.201 00:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.201 00:37:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:16.201 00:37:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.201 00:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.201 00:37:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.461 00:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.461 00:37:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:16.461 00:37:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.461 00:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.461 00:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.720 00:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.720 00:37:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:16.720 00:37:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.720 00:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.720 00:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.978 00:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.978 00:37:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:16.978 00:37:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.978 00:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.978 00:37:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.237 00:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.237 00:37:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:17.237 00:37:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.237 00:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.237 00:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.803 00:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.803 00:37:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:17.803 00:37:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.803 00:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.803 00:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.062 00:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.062 00:37:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:18.062 00:37:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:18.062 00:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.062 00:37:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.320 00:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.320 00:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:18.320 00:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.320 00:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.320 00:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.578 00:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.578 00:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:18.578 00:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.578 00:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.578 00:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.145 00:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.145 00:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:19.145 00:37:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.145 00:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.145 00:37:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.402 00:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.402 00:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:19.402 00:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.402 00:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.402 00:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.660 00:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.660 00:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:19.660 00:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.660 00:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.660 00:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.918 00:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.918 00:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:19.918 00:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.918 00:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.918 00:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.176 00:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.176 00:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:20.176 00:37:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.177 00:37:37 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.177 00:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.744 00:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.744 00:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:20.744 00:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.744 00:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.744 00:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.031 00:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.031 00:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:21.031 00:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.031 00:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.031 00:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.311 00:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.311 00:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:21.311 00:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.312 00:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.312 00:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.571 00:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.571 00:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:21.571 00:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.571 00:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.571 00:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.829 00:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.829 00:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:21.829 00:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.829 00:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.829 00:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.397 00:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.397 00:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:22.397 00:37:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.397 00:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.397 00:37:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.397 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2931427 00:11:22.656 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2931427) - No such process 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2931427 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:22.656 rmmod nvme_tcp 00:11:22.656 rmmod nvme_fabrics 00:11:22.656 rmmod nvme_keyring 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2931151 ']' 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2931151 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2931151 ']' 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2931151 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2931151 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:22.656 00:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:22.657 00:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2931151' 00:11:22.657 killing process with pid 2931151 00:11:22.657 00:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2931151 00:11:22.657 00:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2931151 00:11:22.916 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:22.916 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:22.916 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:22.916 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:22.916 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:22.916 00:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.916 00:37:40 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:22.916 00:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.452 00:37:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:25.452 00:11:25.452 real 0m19.532s 00:11:25.452 user 0m41.726s 00:11:25.452 sys 0m8.042s 00:11:25.452 00:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.452 00:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.452 ************************************ 00:11:25.452 END TEST nvmf_connect_stress 00:11:25.452 ************************************ 00:11:25.452 00:37:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:25.452 00:37:42 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:25.452 00:37:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:25.452 00:37:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.452 00:37:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:25.452 ************************************ 00:11:25.452 START TEST nvmf_fused_ordering 00:11:25.452 ************************************ 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:25.452 * Looking for test storage... 00:11:25.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.452 00:37:42 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:25.452 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:25.453 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.453 00:37:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.453 00:37:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.453 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:25.453 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:25.453 00:37:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:25.453 00:37:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:30.725 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:30.725 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:30.725 Found net devices under 0000:af:00.0: cvl_0_0 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:30.725 Found net devices under 0000:af:00.1: cvl_0_1 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:30.725 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:30.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:11:30.985 00:11:30.985 --- 10.0.0.2 ping statistics --- 00:11:30.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.985 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:30.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:11:30.985 00:11:30.985 --- 10.0.0.1 ping statistics --- 00:11:30.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.985 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2936820 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2936820 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2936820 ']' 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:30.985 00:37:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.985 [2024-07-16 00:37:48.724094] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:11:30.985 [2024-07-16 00:37:48.724136] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.985 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.985 [2024-07-16 00:37:48.799522] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.244 [2024-07-16 00:37:48.906028] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.244 [2024-07-16 00:37:48.906077] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.244 [2024-07-16 00:37:48.906090] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.244 [2024-07-16 00:37:48.906102] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.244 [2024-07-16 00:37:48.906111] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
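The nvmf/common.sh trace above amounts to a small environment-setup sequence; the sketch below reconstructs it in plain bash from the commands the log prints. The interface names (cvl_0_0, cvl_0_1), the cvl_0_0_ns_spdk namespace, the 10.0.0.1/10.0.0.2 addresses, and the nvmf_tgt flags are taken directly from the log; the backgrounding, pid capture, and the waitforlisten call are assumptions about how the helper script glues these steps together, not a verbatim copy of common.sh.

  # Reconstructed from the logged nvmf_tcp_init / nvmfappstart steps (sketch only)
  NS=cvl_0_0_ns_spdk                      # target-side network namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"         # target NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator address stays on the host
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                      # host -> target reachability check
  ip netns exec "$NS" ping -c 1 10.0.0.1  # and the reverse direction
  modprobe nvme-tcp                       # initiator-side kernel transport
  # Start the target inside the namespace; pid capture and waitforlisten are
  # assumed glue (the log shows nvmfpid=2936820 and "waitforlisten 2936820").
  ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"

Once the target is up, the rpc_cmd calls that follow in the log provision it for the test: a TCP transport (nvmf_create_transport -t tcp -o -u 8192), subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, a 1000 MiB NULL1 null bdev, and that bdev attached as namespace 1, after which the fused_ordering initiator connects to the subsystem.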
00:11:31.244 [2024-07-16 00:37:48.906138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.181 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:32.181 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:32.181 00:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:32.181 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:32.181 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.181 00:37:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.181 00:37:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:32.181 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.181 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.181 [2024-07-16 00:37:49.731691] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.181 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.181 00:37:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:32.181 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.182 [2024-07-16 00:37:49.751889] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.182 NULL1 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.182 00:37:49 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.182 00:37:49 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:32.182 [2024-07-16 00:37:49.807058] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:11:32.182 [2024-07-16 00:37:49.807095] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937025 ] 00:11:32.182 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.440 Attached to nqn.2016-06.io.spdk:cnode1 00:11:32.440 Namespace ID: 1 size: 1GB 00:11:32.440 fused_ordering(0) 00:11:32.440 fused_ordering(1) 00:11:32.440 fused_ordering(2) 00:11:32.440 fused_ordering(3) 00:11:32.440 fused_ordering(4) 00:11:32.440 fused_ordering(5) 00:11:32.440 fused_ordering(6) 00:11:32.440 fused_ordering(7) 00:11:32.440 fused_ordering(8) 00:11:32.440 fused_ordering(9) 00:11:32.440 fused_ordering(10) 00:11:32.440 fused_ordering(11) 00:11:32.440 fused_ordering(12) 00:11:32.440 fused_ordering(13) 00:11:32.440 fused_ordering(14) 00:11:32.440 fused_ordering(15) 00:11:32.440 fused_ordering(16) 00:11:32.440 fused_ordering(17) 00:11:32.440 fused_ordering(18) 00:11:32.440 fused_ordering(19) 00:11:32.440 fused_ordering(20) 00:11:32.440 fused_ordering(21) 00:11:32.440 fused_ordering(22) 00:11:32.440 fused_ordering(23) 00:11:32.440 fused_ordering(24) 00:11:32.440 fused_ordering(25) 00:11:32.440 fused_ordering(26) 00:11:32.440 fused_ordering(27) 00:11:32.440 fused_ordering(28) 00:11:32.440 fused_ordering(29) 00:11:32.440 fused_ordering(30) 00:11:32.440 fused_ordering(31) 00:11:32.440 fused_ordering(32) 00:11:32.440 fused_ordering(33) 00:11:32.440 fused_ordering(34) 00:11:32.440 fused_ordering(35) 00:11:32.440 fused_ordering(36) 00:11:32.440 fused_ordering(37) 00:11:32.440 fused_ordering(38) 00:11:32.440 fused_ordering(39) 00:11:32.440 fused_ordering(40) 00:11:32.440 fused_ordering(41) 00:11:32.440 fused_ordering(42) 00:11:32.440 fused_ordering(43) 00:11:32.440 fused_ordering(44) 00:11:32.440 fused_ordering(45) 00:11:32.440 fused_ordering(46) 00:11:32.440 fused_ordering(47) 00:11:32.440 fused_ordering(48) 00:11:32.440 fused_ordering(49) 00:11:32.440 fused_ordering(50) 00:11:32.440 fused_ordering(51) 00:11:32.440 fused_ordering(52) 00:11:32.440 fused_ordering(53) 00:11:32.440 fused_ordering(54) 00:11:32.440 fused_ordering(55) 00:11:32.440 fused_ordering(56) 00:11:32.440 fused_ordering(57) 00:11:32.440 fused_ordering(58) 00:11:32.440 fused_ordering(59) 00:11:32.440 fused_ordering(60) 00:11:32.440 fused_ordering(61) 00:11:32.440 fused_ordering(62) 00:11:32.440 fused_ordering(63) 00:11:32.440 fused_ordering(64) 00:11:32.440 fused_ordering(65) 00:11:32.440 fused_ordering(66) 00:11:32.440 fused_ordering(67) 00:11:32.440 fused_ordering(68) 00:11:32.440 fused_ordering(69) 00:11:32.440 fused_ordering(70) 00:11:32.440 fused_ordering(71) 00:11:32.440 fused_ordering(72) 00:11:32.440 fused_ordering(73) 00:11:32.440 fused_ordering(74) 00:11:32.440 fused_ordering(75) 00:11:32.440 fused_ordering(76) 00:11:32.440 fused_ordering(77) 00:11:32.440 fused_ordering(78) 00:11:32.440 
fused_ordering(79) 00:11:32.440 fused_ordering(80) 00:11:32.440 fused_ordering(81) 00:11:32.440 fused_ordering(82) 00:11:32.440 fused_ordering(83) 00:11:32.440 fused_ordering(84) 00:11:32.440 fused_ordering(85) 00:11:32.440 fused_ordering(86) 00:11:32.440 fused_ordering(87) 00:11:32.440 fused_ordering(88) 00:11:32.440 fused_ordering(89) 00:11:32.440 fused_ordering(90) 00:11:32.440 fused_ordering(91) 00:11:32.441 fused_ordering(92) 00:11:32.441 fused_ordering(93) 00:11:32.441 fused_ordering(94) 00:11:32.441 fused_ordering(95) 00:11:32.441 fused_ordering(96) 00:11:32.441 fused_ordering(97) 00:11:32.441 fused_ordering(98) 00:11:32.441 fused_ordering(99) 00:11:32.441 fused_ordering(100) 00:11:32.441 fused_ordering(101) 00:11:32.441 fused_ordering(102) 00:11:32.441 fused_ordering(103) 00:11:32.441 fused_ordering(104) 00:11:32.441 fused_ordering(105) 00:11:32.441 fused_ordering(106) 00:11:32.441 fused_ordering(107) 00:11:32.441 fused_ordering(108) 00:11:32.441 fused_ordering(109) 00:11:32.441 fused_ordering(110) 00:11:32.441 fused_ordering(111) 00:11:32.441 fused_ordering(112) 00:11:32.441 fused_ordering(113) 00:11:32.441 fused_ordering(114) 00:11:32.441 fused_ordering(115) 00:11:32.441 fused_ordering(116) 00:11:32.441 fused_ordering(117) 00:11:32.441 fused_ordering(118) 00:11:32.441 fused_ordering(119) 00:11:32.441 fused_ordering(120) 00:11:32.441 fused_ordering(121) 00:11:32.441 fused_ordering(122) 00:11:32.441 fused_ordering(123) 00:11:32.441 fused_ordering(124) 00:11:32.441 fused_ordering(125) 00:11:32.441 fused_ordering(126) 00:11:32.441 fused_ordering(127) 00:11:32.441 fused_ordering(128) 00:11:32.441 fused_ordering(129) 00:11:32.441 fused_ordering(130) 00:11:32.441 fused_ordering(131) 00:11:32.441 fused_ordering(132) 00:11:32.441 fused_ordering(133) 00:11:32.441 fused_ordering(134) 00:11:32.441 fused_ordering(135) 00:11:32.441 fused_ordering(136) 00:11:32.441 fused_ordering(137) 00:11:32.441 fused_ordering(138) 00:11:32.441 fused_ordering(139) 00:11:32.441 fused_ordering(140) 00:11:32.441 fused_ordering(141) 00:11:32.441 fused_ordering(142) 00:11:32.441 fused_ordering(143) 00:11:32.441 fused_ordering(144) 00:11:32.441 fused_ordering(145) 00:11:32.441 fused_ordering(146) 00:11:32.441 fused_ordering(147) 00:11:32.441 fused_ordering(148) 00:11:32.441 fused_ordering(149) 00:11:32.441 fused_ordering(150) 00:11:32.441 fused_ordering(151) 00:11:32.441 fused_ordering(152) 00:11:32.441 fused_ordering(153) 00:11:32.441 fused_ordering(154) 00:11:32.441 fused_ordering(155) 00:11:32.441 fused_ordering(156) 00:11:32.441 fused_ordering(157) 00:11:32.441 fused_ordering(158) 00:11:32.441 fused_ordering(159) 00:11:32.441 fused_ordering(160) 00:11:32.441 fused_ordering(161) 00:11:32.441 fused_ordering(162) 00:11:32.441 fused_ordering(163) 00:11:32.441 fused_ordering(164) 00:11:32.441 fused_ordering(165) 00:11:32.441 fused_ordering(166) 00:11:32.441 fused_ordering(167) 00:11:32.441 fused_ordering(168) 00:11:32.441 fused_ordering(169) 00:11:32.441 fused_ordering(170) 00:11:32.441 fused_ordering(171) 00:11:32.441 fused_ordering(172) 00:11:32.441 fused_ordering(173) 00:11:32.441 fused_ordering(174) 00:11:32.441 fused_ordering(175) 00:11:32.441 fused_ordering(176) 00:11:32.441 fused_ordering(177) 00:11:32.441 fused_ordering(178) 00:11:32.441 fused_ordering(179) 00:11:32.441 fused_ordering(180) 00:11:32.441 fused_ordering(181) 00:11:32.441 fused_ordering(182) 00:11:32.441 fused_ordering(183) 00:11:32.441 fused_ordering(184) 00:11:32.441 fused_ordering(185) 00:11:32.441 fused_ordering(186) 00:11:32.441 
fused_ordering(187) 00:11:32.441 … fused_ordering(939)
00:11:34.713 fused_ordering(940) 00:11:34.713 fused_ordering(941) 00:11:34.713 fused_ordering(942) 00:11:34.713 fused_ordering(943) 00:11:34.713 fused_ordering(944) 00:11:34.713 fused_ordering(945) 00:11:34.713 fused_ordering(946) 00:11:34.713 fused_ordering(947) 00:11:34.713 fused_ordering(948) 00:11:34.713 fused_ordering(949) 00:11:34.713 fused_ordering(950) 00:11:34.713 fused_ordering(951) 00:11:34.713 fused_ordering(952) 00:11:34.713 fused_ordering(953) 00:11:34.713 fused_ordering(954) 00:11:34.713 fused_ordering(955) 00:11:34.713 fused_ordering(956) 00:11:34.713 fused_ordering(957) 00:11:34.713 fused_ordering(958) 00:11:34.713 fused_ordering(959) 00:11:34.713 fused_ordering(960) 00:11:34.713 fused_ordering(961) 00:11:34.713 fused_ordering(962) 00:11:34.713 fused_ordering(963) 00:11:34.713 fused_ordering(964) 00:11:34.713 fused_ordering(965) 00:11:34.713 fused_ordering(966) 00:11:34.713 fused_ordering(967) 00:11:34.713 fused_ordering(968) 00:11:34.713 fused_ordering(969) 00:11:34.713 fused_ordering(970) 00:11:34.713 fused_ordering(971) 00:11:34.713 fused_ordering(972) 00:11:34.713 fused_ordering(973) 00:11:34.713 fused_ordering(974) 00:11:34.713 fused_ordering(975) 00:11:34.713 fused_ordering(976) 00:11:34.713 fused_ordering(977) 00:11:34.713 fused_ordering(978) 00:11:34.713 fused_ordering(979) 00:11:34.713 fused_ordering(980) 00:11:34.713 fused_ordering(981) 00:11:34.713 fused_ordering(982) 00:11:34.713 fused_ordering(983) 00:11:34.713 fused_ordering(984) 00:11:34.713 fused_ordering(985) 00:11:34.713 fused_ordering(986) 00:11:34.713 fused_ordering(987) 00:11:34.713 fused_ordering(988) 00:11:34.713 fused_ordering(989) 00:11:34.713 fused_ordering(990) 00:11:34.713 fused_ordering(991) 00:11:34.713 fused_ordering(992) 00:11:34.713 fused_ordering(993) 00:11:34.713 fused_ordering(994) 00:11:34.713 fused_ordering(995) 00:11:34.713 fused_ordering(996) 00:11:34.713 fused_ordering(997) 00:11:34.713 fused_ordering(998) 00:11:34.713 fused_ordering(999) 00:11:34.713 fused_ordering(1000) 00:11:34.713 fused_ordering(1001) 00:11:34.713 fused_ordering(1002) 00:11:34.713 fused_ordering(1003) 00:11:34.713 fused_ordering(1004) 00:11:34.713 fused_ordering(1005) 00:11:34.713 fused_ordering(1006) 00:11:34.713 fused_ordering(1007) 00:11:34.713 fused_ordering(1008) 00:11:34.713 fused_ordering(1009) 00:11:34.713 fused_ordering(1010) 00:11:34.713 fused_ordering(1011) 00:11:34.713 fused_ordering(1012) 00:11:34.713 fused_ordering(1013) 00:11:34.713 fused_ordering(1014) 00:11:34.713 fused_ordering(1015) 00:11:34.713 fused_ordering(1016) 00:11:34.713 fused_ordering(1017) 00:11:34.713 fused_ordering(1018) 00:11:34.713 fused_ordering(1019) 00:11:34.713 fused_ordering(1020) 00:11:34.713 fused_ordering(1021) 00:11:34.713 fused_ordering(1022) 00:11:34.713 fused_ordering(1023) 00:11:34.713 00:37:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:34.713 00:37:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:34.713 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.713 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:34.713 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:34.713 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:34.713 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.713 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:11:34.713 rmmod nvme_tcp 00:11:34.713 rmmod nvme_fabrics 00:11:34.972 rmmod nvme_keyring 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2936820 ']' 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2936820 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2936820 ']' 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2936820 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2936820 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2936820' 00:11:34.972 killing process with pid 2936820 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2936820 00:11:34.972 00:37:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2936820 00:11:35.230 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:35.230 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:35.230 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:35.230 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:35.230 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:35.230 00:37:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.230 00:37:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.230 00:37:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.357 00:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:37.357 00:11:37.357 real 0m12.159s 00:11:37.357 user 0m7.242s 00:11:37.357 sys 0m6.225s 00:11:37.357 00:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.357 00:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:37.357 ************************************ 00:11:37.357 END TEST nvmf_fused_ordering 00:11:37.357 ************************************ 00:11:37.357 00:37:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:37.357 00:37:55 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:37.357 00:37:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:37.357 00:37:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
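The fused_ordering run above ends with a teardown that every test in this log repeats: clear the SIGINT/SIGTERM/EXIT trap, unload the nvme-tcp and nvme-fabrics kernel modules while tolerating transient failures, then kill the nvmf_tgt process by PID and wait for it. A minimal standalone sketch of that sequence follows; the helper name and retry count are illustrative assumptions, not the actual nvmf/common.sh implementation.

#!/usr/bin/env bash
# Sketch of the teardown pattern seen above (assumed helper, not the real nvmf/common.sh).
nvmf_teardown() {
    local pid=$1
    trap - SIGINT SIGTERM EXIT           # drop the error trap installed by the test
    sync                                 # flush outstanding writes before unloading modules
    set +e                               # module removal may fail while references remain
    for _ in {1..20}; do                 # retry: nvme-tcp refuses to unload while still in use
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
    if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                      # reap the nvmf_tgt reactor process
    fi
}

nvmf_teardown "$nvmfpid"                 # $nvmfpid: PID captured when nvmf_tgt was started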
00:11:37.357 00:37:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:37.357 ************************************ 00:11:37.357 START TEST nvmf_delete_subsystem 00:11:37.357 ************************************ 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:37.357 * Looking for test storage... 00:11:37.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.357 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:37.358 00:37:55 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:37.358 00:37:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:43.922 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:43.922 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.922 
00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:43.922 Found net devices under 0000:af:00.0: cvl_0_0 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:43.922 Found net devices under 0000:af:00.1: cvl_0_1 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.922 00:38:00 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:43.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:11:43.922 00:11:43.922 --- 10.0.0.2 ping statistics --- 00:11:43.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.922 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:11:43.922 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:11:43.922 00:11:43.922 --- 10.0.0.1 ping statistics --- 00:11:43.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.923 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2941264 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2941264 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2941264 ']' 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.923 00:38:00 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.923 00:38:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.923 [2024-07-16 00:38:00.998395] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:11:43.923 [2024-07-16 00:38:00.998450] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.923 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.923 [2024-07-16 00:38:01.079496] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:43.923 [2024-07-16 00:38:01.170502] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.923 [2024-07-16 00:38:01.170545] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.923 [2024-07-16 00:38:01.170555] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.923 [2024-07-16 00:38:01.170564] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.923 [2024-07-16 00:38:01.170571] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
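Before any NVMe-oF work starts, the delete_subsystem test builds the same two-port topology used throughout this log: the e810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction confirms the path before nvmf_tgt is started inside the namespace. A condensed sketch of that bring-up is shown below; the interface names and addresses are taken from the log, the binary path is abbreviated, and the socket-polling loop is an assumed stand-in for waitforlisten.

#!/usr/bin/env bash
# Target port goes into a namespace; initiator port stays in the root namespace.
TGT_IF=cvl_0_0
INI_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator

# Launch the target inside the namespace and wait for its RPC socket (assumed
# readiness check; the test uses the waitforlisten helper instead).
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
for _ in {1..100}; do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done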
00:11:43.923 [2024-07-16 00:38:01.174284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.923 [2024-07-16 00:38:01.174288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.923 [2024-07-16 00:38:01.327030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.923 [2024-07-16 00:38:01.347592] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.923 NULL1 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.923 Delay0 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.923 00:38:01 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2941285 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:43.923 00:38:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:43.923 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.923 [2024-07-16 00:38:01.448735] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:45.827 00:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.827 00:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.827 00:38:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.827 Write completed with error (sct=0, sc=8) 00:11:45.827 Write completed with error (sct=0, sc=8) 00:11:45.827 starting I/O failed: -6 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 starting I/O failed: -6 00:11:45.827 Write completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Write completed with error (sct=0, sc=8) 00:11:45.827 starting I/O failed: -6 00:11:45.827 Write completed with error (sct=0, sc=8) 00:11:45.827 Write completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 starting I/O failed: -6 00:11:45.827 Write completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Write completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 starting I/O failed: -6 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Write completed with error (sct=0, sc=8) 00:11:45.827 starting I/O failed: -6 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 starting I/O failed: -6 00:11:45.827 Write completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 starting I/O failed: -6 00:11:45.827 Read 
completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.827 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 [2024-07-16 00:38:03.548564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9e5800d020 is same with the state(5) to be set 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with 
error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 
00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 Write completed with error (sct=0, sc=8) 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 starting I/O failed: -6 00:11:45.828 Read completed with error (sct=0, sc=8) 00:11:45.828 [2024-07-16 00:38:03.550393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119bb50 is same with the state(5) to be set 00:11:46.765 [2024-07-16 00:38:04.506732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a6e0 is same with the state(5) to be set 00:11:46.765 Read completed with error (sct=0, sc=8) 00:11:46.765 Read completed with error (sct=0, sc=8) 00:11:46.765 Read completed with error (sct=0, sc=8) 00:11:46.765 Read completed with error (sct=0, sc=8) 00:11:46.765 Write completed with error (sct=0, sc=8) 00:11:46.765 Write completed with error (sct=0, sc=8) 00:11:46.765 Write completed with error (sct=0, sc=8) 00:11:46.765 Write completed with error (sct=0, sc=8) 00:11:46.765 Write completed with error (sct=0, sc=8) 00:11:46.765 Read completed with error (sct=0, sc=8) 00:11:46.765 Read completed with error (sct=0, sc=8) 00:11:46.765 Read completed with error (sct=0, sc=8) 00:11:46.765 Write completed with error (sct=0, sc=8) 00:11:46.765 Read completed with error (sct=0, sc=8) 00:11:46.765 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read 
completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 [2024-07-16 00:38:04.549631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119b6c0 is same with the state(5) to be set 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 [2024-07-16 00:38:04.549894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9e5800d370 is same with the state(5) to be set 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with 
error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 [2024-07-16 00:38:04.550406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119b310 is same with the state(5) to be set 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Write completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 Read completed with error (sct=0, sc=8) 00:11:46.766 [2024-07-16 00:38:04.551082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119bea0 is same with the state(5) to be set 00:11:46.766 Initializing NVMe Controllers 00:11:46.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:46.766 Controller IO queue size 128, less than required. 00:11:46.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:46.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:46.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:46.766 Initialization complete. Launching workers. 
00:11:46.766 ======================================================== 00:11:46.766 Latency(us) 00:11:46.766 Device Information : IOPS MiB/s Average min max 00:11:46.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 194.74 0.10 946090.51 4054.08 1018932.79 00:11:46.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.23 0.07 889339.34 929.39 1019845.69 00:11:46.766 ======================================================== 00:11:46.766 Total : 346.97 0.17 921191.14 929.39 1019845.69 00:11:46.766 00:11:46.766 [2024-07-16 00:38:04.552004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117a6e0 (9): Bad file descriptor 00:11:46.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:46.766 00:38:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.766 00:38:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:46.766 00:38:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2941285 00:11:46.766 00:38:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2941285 00:11:47.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2941285) - No such process 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2941285 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2941285 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2941285 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:47.334 [2024-07-16 00:38:05.077250] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2942006 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2942006 00:11:47.334 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.334 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.334 [2024-07-16 00:38:05.163974] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
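The run above is the heart of the delete-under-load check: the target publishes nqn.2016-06.io.spdk:cnode1 over TCP, spdk_nvme_perf is started against it, and nvmf_delete_subsystem is issued while the queue-depth-128 workload is still in flight, so the outstanding commands complete with errors and the perf process exits on its own (the kill -0 loop only polls for that). Below is a minimal stand-alone sketch of the same flow; the SPDK checkout path, the default rpc.py socket, and the plain Malloc0 bdev (standing in for the Delay0 bdev the real test registers, created earlier in the log outside this excerpt) are assumptions, not taken from this output.

#!/usr/bin/env bash
# Sketch: delete an NVMe-oF TCP subsystem while spdk_nvme_perf is driving I/O to it.
# Assumes an SPDK checkout in $SPDK_DIR and a running nvmf_tgt reachable over the
# default rpc.py socket; Malloc0 is a stand-in for the test's Delay0 bdev.
set -e
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}           # assumption: adjust to your checkout
RPC="$SPDK_DIR/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0

# Start the same randrw workload the test uses, in the background.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

sleep 2                                        # let I/O ramp up
"$RPC" nvmf_delete_subsystem "$NQN"            # yank the subsystem mid-workload

# perf should observe the failed completions and exit by itself.
while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done
echo "spdk_nvme_perf exited after subsystem deletion"

In the log this sequence plays out twice, with perf PIDs 2941285 and 2942006, the second time against the recreated subsystem, before nvmftestfini tears the target down.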
00:11:47.903 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.903 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2942006 00:11:47.903 00:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.471 00:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.471 00:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2942006 00:11:48.472 00:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.041 00:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.041 00:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2942006 00:11:49.041 00:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.300 00:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.300 00:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2942006 00:11:49.300 00:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.869 00:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.869 00:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2942006 00:11:49.869 00:38:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:50.437 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:50.437 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2942006 00:11:50.437 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:50.696 Initializing NVMe Controllers 00:11:50.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:50.696 Controller IO queue size 128, less than required. 00:11:50.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:50.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:50.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:50.696 Initialization complete. Launching workers. 
00:11:50.696 ======================================================== 00:11:50.696 Latency(us) 00:11:50.696 Device Information : IOPS MiB/s Average min max 00:11:50.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005687.46 1000283.70 1041027.00 00:11:50.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006205.14 1000289.28 1040834.31 00:11:50.696 ======================================================== 00:11:50.696 Total : 256.00 0.12 1005946.30 1000283.70 1041027.00 00:11:50.696 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2942006 00:11:50.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2942006) - No such process 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2942006 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:50.956 rmmod nvme_tcp 00:11:50.956 rmmod nvme_fabrics 00:11:50.956 rmmod nvme_keyring 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2941264 ']' 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2941264 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2941264 ']' 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2941264 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2941264 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2941264' 00:11:50.956 killing process with pid 2941264 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2941264 00:11:50.956 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
2941264 00:11:51.216 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:51.216 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:51.216 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:51.216 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:51.216 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:51.216 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.216 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:51.216 00:38:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.752 00:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:53.752 00:11:53.752 real 0m15.948s 00:11:53.752 user 0m29.303s 00:11:53.752 sys 0m5.339s 00:11:53.752 00:38:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:53.753 00:38:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.753 ************************************ 00:11:53.753 END TEST nvmf_delete_subsystem 00:11:53.753 ************************************ 00:11:53.753 00:38:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:53.753 00:38:11 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:53.753 00:38:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:53.753 00:38:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.753 00:38:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:53.753 ************************************ 00:11:53.753 START TEST nvmf_ns_masking 00:11:53.753 ************************************ 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:53.753 * Looking for test storage... 
00:11:53.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=891796d9-dac4-4eb6-ae44-fdbe1e055610 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=dfcba3c2-4c72-4a97-a818-0337490aacbc 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=8075a7bd-3205-4055-be65-3cd6206705b7 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:53.753 00:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:59.036 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.036 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:59.037 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.037 
00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:59.037 Found net devices under 0000:af:00.0: cvl_0_0 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:59.037 Found net devices under 0000:af:00.1: cvl_0_1 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.037 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.294 00:38:16 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.294 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.294 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:59.294 00:38:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.294 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.294 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.294 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:59.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:11:59.294 00:11:59.294 --- 10.0.0.2 ping statistics --- 00:11:59.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.294 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:11:59.294 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:11:59.294 00:11:59.294 --- 10.0.0.1 ping statistics --- 00:11:59.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.295 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2946339 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2946339 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2946339 ']' 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.295 00:38:17 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.295 00:38:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:59.552 [2024-07-16 00:38:17.139467] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:11:59.552 [2024-07-16 00:38:17.139522] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.552 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.552 [2024-07-16 00:38:17.228633] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.552 [2024-07-16 00:38:17.318327] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.552 [2024-07-16 00:38:17.318368] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.552 [2024-07-16 00:38:17.318379] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.552 [2024-07-16 00:38:17.318387] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.552 [2024-07-16 00:38:17.318395] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.552 [2024-07-16 00:38:17.318417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.488 00:38:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.489 00:38:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:00.489 00:38:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:00.489 00:38:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:00.489 00:38:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:00.489 00:38:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.489 00:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:00.747 [2024-07-16 00:38:18.339786] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.747 00:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:00.747 00:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:00.747 00:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:01.004 Malloc1 00:12:01.004 00:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:01.262 Malloc2 00:12:01.262 00:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
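The ns_masking prologue above moves the physical e810 port cvl_0_0 into its own network namespace and runs nvmf_tgt inside it, so the initiator-side port cvl_0_1 (10.0.0.1) reaches the target address (10.0.0.2) over a real link; only then are the TCP transport, the two Malloc bdevs and the cnode1 subsystem created. A condensed sketch of that plumbing follows; the interface names, the 10.0.0.x addresses and $SPDK_DIR mirror this log and are assumptions on any other machine, and the sleep is a crude stand-in for the test's waitforlisten helper.

#!/usr/bin/env bash
# Sketch: run nvmf_tgt in a network namespace on a physical NIC, as nvmftestinit does here.
set -e
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}            # assumption: adjust to your checkout
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side stays in the default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                             # sanity check: initiator can reach the target address

# Target application and storage stack, driven through rpc.py.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
sleep 2                                        # stand-in for waitforlisten
RPC="$SPDK_DIR/scripts/rpc.py"
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc1
"$RPC" bdev_malloc_create 64 512 -b Malloc2
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME

The RPC socket is a Unix-domain socket in the filesystem rather than anything per-namespace, which is why the rpc.py calls in the log run without ip netns exec even though the TCP listener lives inside cvl_0_0_ns_spdk.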
00:12:01.521 00:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:01.780 00:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.780 [2024-07-16 00:38:19.589678] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.780 00:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:01.780 00:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8075a7bd-3205-4055-be65-3cd6206705b7 -a 10.0.0.2 -s 4420 -i 4 00:12:02.040 00:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.040 00:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:02.040 00:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.040 00:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:02.040 00:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:04.596 [ 0]:0x1 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89b9e70211bb480ab9987b8f95285de8 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89b9e70211bb480ab9987b8f95285de8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.596 00:38:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
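What the lines above and below keep repeating is the test's visibility probe: connect from the initiator with a fixed host NQN and host ID, then decide whether a namespace is exposed by reading its NGUID, since (as the later checks in this run show) a masked namespace comes back from nvme id-ns with an all-zero NGUID while a visible one reports a real value. Here is a small helper in the same spirit, assuming the controller shows up as /dev/nvme0 as it does in this log; the host NQN and host ID are the values seen above.

# Sketch of the visibility check used throughout this test.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 \
    -I 8075a7bd-3205-4055-be65-3cd6206705b7 -i 4

ns_is_visible() {                      # $1 = NSID as used by nvme-cli, e.g. 0x1
    local nguid
    nvme list-ns /dev/nvme0 | grep "$1"            # informational: prints "[ n]:<nsid>" when attached
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]   # all-zero NGUID => namespace is masked
}

ns_is_visible 0x1 && echo "NSID 1 visible" || echo "NSID 1 hidden"
ns_is_visible 0x2 && echo "NSID 2 visible" || echo "NSID 2 hidden"

Masking itself is toggled purely on the target side, through nvmf_subsystem_add_ns --no-auto-visible and the nvmf_ns_add_host / nvmf_ns_remove_host RPCs seen further down in this run; the helper above only observes the effect from the host.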
00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:04.596 [ 0]:0x1 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89b9e70211bb480ab9987b8f95285de8 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89b9e70211bb480ab9987b8f95285de8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:04.596 [ 1]:0x2 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bcc8e861a5864046a68e034af1221688 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bcc8e861a5864046a68e034af1221688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.596 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.894 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:05.153 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:05.153 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8075a7bd-3205-4055-be65-3cd6206705b7 -a 10.0.0.2 -s 4420 -i 4 00:12:05.153 00:38:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:05.153 00:38:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:05.153 00:38:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.153 00:38:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:05.153 00:38:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:05.153 00:38:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:07.684 00:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:07.684 00:38:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:07.684 00:38:24 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.684 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:07.684 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.685 [ 0]:0x2 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bcc8e861a5864046a68e034af1221688 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
bcc8e861a5864046a68e034af1221688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.685 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:07.685 [ 0]:0x1 00:12:07.943 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:07.943 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.943 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89b9e70211bb480ab9987b8f95285de8 00:12:07.943 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89b9e70211bb480ab9987b8f95285de8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.943 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:07.943 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:07.943 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:07.943 [ 1]:0x2 00:12:07.943 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:07.943 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:07.943 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bcc8e861a5864046a68e034af1221688 00:12:07.943 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bcc8e861a5864046a68e034af1221688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:07.943 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:08.202 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:08.202 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:08.202 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:08.202 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:08.202 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.202 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:08.202 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.202 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:08.203 [ 0]:0x2 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:08.203 00:38:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:08.203 00:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bcc8e861a5864046a68e034af1221688 00:12:08.203 00:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bcc8e861a5864046a68e034af1221688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:08.203 00:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:08.203 00:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:08.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.461 00:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:08.720 00:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:08.720 00:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8075a7bd-3205-4055-be65-3cd6206705b7 -a 10.0.0.2 -s 4420 -i 4 00:12:08.720 00:38:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:08.720 00:38:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:08.720 00:38:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.720 00:38:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:08.720 00:38:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:08.720 00:38:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
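The sequence above is the core of the masking test: the namespace is re-added with --no-auto-visible, so it stays hidden until the host NQN is explicitly allowed, and it disappears again once that host is removed. A condensed sketch of the RPC calls involved, using the same subsystem, bdev, and host NQN as this run (the rpc.py path is shortened for readability):

# Condensed namespace-masking flow (names copied from this run).
RPC=scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2016-06.io.spdk:host1

# Add the namespace hidden from all hosts by default.
$RPC nvmf_subsystem_add_ns $NQN Malloc1 -n 1 --no-auto-visible
# Grant visibility to a single host NQN...
$RPC nvmf_ns_add_host $NQN 1 $HOST
# ...and revoke it again; the connected host sees the namespace vanish.
$RPC nvmf_ns_remove_host $NQN 1 $HOST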
00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.256 [ 0]:0x1 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89b9e70211bb480ab9987b8f95285de8 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89b9e70211bb480ab9987b8f95285de8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:11.256 [ 1]:0x2 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bcc8e861a5864046a68e034af1221688 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bcc8e861a5864046a68e034af1221688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:11.256 00:38:28 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:11.256 [ 0]:0x2 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bcc8e861a5864046a68e034af1221688 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bcc8e861a5864046a68e034af1221688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:11.256 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:11.516 [2024-07-16 00:38:29.291090] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:11.516 request: 00:12:11.516 { 00:12:11.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:11.516 "nsid": 2, 00:12:11.516 "host": "nqn.2016-06.io.spdk:host1", 00:12:11.516 "method": "nvmf_ns_remove_host", 00:12:11.516 "req_id": 1 00:12:11.516 } 00:12:11.516 Got JSON-RPC error response 00:12:11.516 response: 00:12:11.516 { 00:12:11.516 "code": -32602, 00:12:11.516 "message": "Invalid parameters" 00:12:11.516 } 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:11.516 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:11.775 [ 0]:0x2 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bcc8e861a5864046a68e034af1221688 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
bcc8e861a5864046a68e034af1221688 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:11.775 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.034 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2948606 00:12:12.034 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:12.034 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.034 00:38:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2948606 /var/tmp/host.sock 00:12:12.034 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2948606 ']' 00:12:12.034 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:12.034 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:12.034 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:12.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:12.034 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:12.034 00:38:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:12.034 [2024-07-16 00:38:29.689115] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
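At this point the test launches a second SPDK application to act as the initiator, pinned to its own core and listening on a private RPC socket (/var/tmp/host.sock). RPC calls aimed at that process simply add -s with the socket path, which is what the hostrpc calls in the following records boil down to. A sketch under those assumptions, with relative paths standing in for the full workspace paths seen above:

# Host-side SPDK instance with a private RPC socket (paths shortened).
build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
hostpid=$!

# Direct RPCs at the host-side instance instead of the target by socket path.
hostrpc() {
    scripts/rpc.py -s /var/tmp/host.sock "$@"
}

# Example from the records that follow: attach an NVMe-oF controller.
hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0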
00:12:12.034 [2024-07-16 00:38:29.689172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948606 ] 00:12:12.034 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.034 [2024-07-16 00:38:29.772102] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.294 [2024-07-16 00:38:29.876349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.862 00:38:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.862 00:38:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:12.862 00:38:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.121 00:38:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:13.380 00:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 891796d9-dac4-4eb6-ae44-fdbe1e055610 00:12:13.380 00:38:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:13.380 00:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 891796D9DAC44EB6AE44FDBE1E055610 -i 00:12:13.639 00:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid dfcba3c2-4c72-4a97-a818-0337490aacbc 00:12:13.639 00:38:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:13.639 00:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DFCBA3C24C724A97A8180337490AACBC -i 00:12:13.898 00:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:14.157 00:38:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:14.416 00:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:14.416 00:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:14.984 nvme0n1 00:12:14.984 00:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:14.984 00:38:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:15.242 nvme1n2 00:12:15.242 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:15.242 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:15.242 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:15.243 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:15.243 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:15.501 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:15.501 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:15.501 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:15.501 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:15.761 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 891796d9-dac4-4eb6-ae44-fdbe1e055610 == \8\9\1\7\9\6\d\9\-\d\a\c\4\-\4\e\b\6\-\a\e\4\4\-\f\d\b\e\1\e\0\5\5\6\1\0 ]] 00:12:15.761 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:15.761 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:15.761 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:16.020 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ dfcba3c2-4c72-4a97-a818-0337490aacbc == \d\f\c\b\a\3\c\2\-\4\c\7\2\-\4\a\9\7\-\a\8\1\8\-\0\3\3\7\4\9\0\a\a\c\b\c ]] 00:12:16.020 00:38:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2948606 00:12:16.020 00:38:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2948606 ']' 00:12:16.020 00:38:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2948606 00:12:16.020 00:38:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:16.279 00:38:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:16.279 00:38:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2948606 00:12:16.279 00:38:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:16.279 00:38:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:16.279 00:38:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2948606' 00:12:16.279 killing process with pid 2948606 00:12:16.279 00:38:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2948606 00:12:16.279 00:38:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2948606 00:12:16.537 00:38:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:16.796 00:38:34 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.796 rmmod nvme_tcp 00:12:16.796 rmmod nvme_fabrics 00:12:16.796 rmmod nvme_keyring 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2946339 ']' 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2946339 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2946339 ']' 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2946339 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:16.796 00:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2946339 00:12:17.055 00:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:17.055 00:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:17.055 00:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2946339' 00:12:17.055 killing process with pid 2946339 00:12:17.055 00:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2946339 00:12:17.055 00:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2946339 00:12:17.055 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:17.055 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:17.055 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:17.055 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.055 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:17.055 00:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.055 00:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.055 00:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.604 00:38:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.604 00:12:19.604 real 0m25.845s 00:12:19.604 user 0m30.464s 00:12:19.604 sys 0m6.876s 00:12:19.604 00:38:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.604 00:38:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:19.604 ************************************ 00:12:19.604 END TEST nvmf_ns_masking 00:12:19.604 ************************************ 00:12:19.604 00:38:36 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:19.604 00:38:36 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:19.604 00:38:36 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:19.604 00:38:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:19.604 00:38:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.604 00:38:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.604 ************************************ 00:12:19.604 START TEST nvmf_nvme_cli 00:12:19.604 ************************************ 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:19.604 * Looking for test storage... 00:12:19.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:19.604 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.605 00:38:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:24.881 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:24.881 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:24.881 Found net devices under 0000:af:00.0: cvl_0_0 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.881 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:24.881 Found net devices under 0000:af:00.1: cvl_0_1 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.882 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.141 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.141 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.141 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:25.141 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.141 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.141 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.141 00:38:42 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:25.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:12:25.142 00:12:25.142 --- 10.0.0.2 ping statistics --- 00:12:25.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.142 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:12:25.142 00:12:25.142 --- 10.0.0.1 ping statistics --- 00:12:25.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.142 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2953146 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2953146 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2953146 ']' 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.142 00:38:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:25.401 [2024-07-16 00:38:43.025203] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
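The ping checks above succeed because nvmftestinit splits the two detected ports across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 for the target, while cvl_0_1 stays in the default namespace as 10.0.0.1 for the initiator, so both ends run on one machine over separate interfaces. A reduced sketch of that setup, with the interface names and addresses from this run:

# Two-namespace topology used for the phy TCP tests (names/addresses as above).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity check: the initiator-side interface can reach the target address.
ping -c 1 10.0.0.2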
00:12:25.401 [2024-07-16 00:38:43.025284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.401 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.401 [2024-07-16 00:38:43.125890] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.401 [2024-07-16 00:38:43.221069] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.401 [2024-07-16 00:38:43.221111] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.401 [2024-07-16 00:38:43.221121] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.401 [2024-07-16 00:38:43.221129] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.401 [2024-07-16 00:38:43.221136] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.401 [2024-07-16 00:38:43.221187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.401 [2024-07-16 00:38:43.221300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.401 [2024-07-16 00:38:43.221345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.401 [2024-07-16 00:38:43.221345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.337 00:38:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.337 00:38:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:26.337 00:38:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.337 00:38:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:26.337 00:38:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.337 [2024-07-16 00:38:44.012283] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.337 Malloc0 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.337 Malloc1 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.337 00:38:44 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.337 [2024-07-16 00:38:44.102596] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:26.337 00:38:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.338 00:38:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:26.596 00:12:26.596 Discovery Log Number of Records 2, Generation counter 2 00:12:26.596 =====Discovery Log Entry 0====== 00:12:26.596 trtype: tcp 00:12:26.596 adrfam: ipv4 00:12:26.596 subtype: current discovery subsystem 00:12:26.596 treq: not required 00:12:26.596 portid: 0 00:12:26.596 trsvcid: 4420 00:12:26.596 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:26.596 traddr: 10.0.0.2 00:12:26.596 eflags: explicit discovery connections, duplicate discovery information 00:12:26.596 sectype: none 00:12:26.596 =====Discovery Log Entry 1====== 00:12:26.596 trtype: tcp 00:12:26.596 adrfam: ipv4 00:12:26.596 subtype: nvme subsystem 00:12:26.596 treq: not required 00:12:26.596 portid: 0 00:12:26.596 trsvcid: 4420 00:12:26.596 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:26.596 traddr: 10.0.0.2 00:12:26.596 eflags: none 00:12:26.596 sectype: none 00:12:26.596 00:38:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:26.596 00:38:44 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:26.596 00:38:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:26.596 00:38:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:26.596 00:38:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:26.596 00:38:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:26.596 00:38:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:26.596 00:38:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:26.597 00:38:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:26.597 00:38:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:26.597 00:38:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.974 00:38:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:27.974 00:38:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:27.974 00:38:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.974 00:38:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:27.974 00:38:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:27.974 00:38:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:29.879 00:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:29.879 00:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:29.879 00:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.879 00:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:29.879 00:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.879 00:38:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:29.879 00:38:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:29.879 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:29.879 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:29.879 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:30.138 00:38:47 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:30.138 /dev/nvme0n1 ]] 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:30.138 00:38:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.398 00:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.398 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:30.398 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:30.398 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.657 rmmod nvme_tcp 00:12:30.657 rmmod nvme_fabrics 00:12:30.657 rmmod nvme_keyring 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2953146 ']' 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2953146 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2953146 ']' 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2953146 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2953146 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2953146' 00:12:30.657 killing process with pid 2953146 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2953146 00:12:30.657 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2953146 00:12:30.916 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:30.916 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:30.916 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:30.916 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.916 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:30.916 00:38:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.916 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.916 00:38:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.455 00:38:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:33.455 00:12:33.455 real 0m13.693s 00:12:33.455 user 0m23.184s 00:12:33.455 sys 0m5.105s 00:12:33.455 00:38:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:33.455 00:38:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.455 ************************************ 00:12:33.455 END TEST nvmf_nvme_cli 00:12:33.455 ************************************ 00:12:33.455 00:38:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:33.455 00:38:50 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:33.455 00:38:50 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:33.455 00:38:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:33.455 00:38:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.455 00:38:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:33.455 ************************************ 00:12:33.455 START TEST nvmf_vfio_user 00:12:33.455 ************************************ 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:33.455 * Looking for test storage... 00:12:33.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:33.455 
00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2954842 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2954842' 00:12:33.455 Process pid: 2954842 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2954842 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2954842 ']' 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:33.455 00:38:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:33.455 [2024-07-16 00:38:50.968800] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:12:33.455 [2024-07-16 00:38:50.968860] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.455 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.455 [2024-07-16 00:38:51.053604] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.455 [2024-07-16 00:38:51.149176] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.455 [2024-07-16 00:38:51.149218] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.456 [2024-07-16 00:38:51.149228] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.456 [2024-07-16 00:38:51.149237] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.456 [2024-07-16 00:38:51.149246] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:33.456 [2024-07-16 00:38:51.149304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.456 [2024-07-16 00:38:51.149343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.456 [2024-07-16 00:38:51.149456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.456 [2024-07-16 00:38:51.149455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.392 00:38:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:34.392 00:38:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:34.392 00:38:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:35.327 00:38:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:35.588 00:38:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:35.588 00:38:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:35.588 00:38:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:35.588 00:38:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:35.588 00:38:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:35.848 Malloc1 00:12:35.848 00:38:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:36.105 00:38:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:36.105 00:38:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:36.363 00:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:36.363 00:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:36.363 00:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:36.622 Malloc2 00:12:36.622 00:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:36.880 00:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:37.139 00:38:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:37.397 00:38:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:37.397 00:38:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:37.397 00:38:55 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:37.397 00:38:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:37.397 00:38:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:37.397 00:38:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:37.656 [2024-07-16 00:38:55.252214] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:12:37.656 [2024-07-16 00:38:55.252249] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2955621 ] 00:12:37.656 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.656 [2024-07-16 00:38:55.291209] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:37.656 [2024-07-16 00:38:55.293736] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:37.656 [2024-07-16 00:38:55.293761] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb53dca6000 00:12:37.656 [2024-07-16 00:38:55.294738] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.656 [2024-07-16 00:38:55.295739] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.656 [2024-07-16 00:38:55.296749] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.656 [2024-07-16 00:38:55.297762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:37.656 [2024-07-16 00:38:55.298773] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:37.656 [2024-07-16 00:38:55.299780] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.656 [2024-07-16 00:38:55.300792] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:37.656 [2024-07-16 00:38:55.301803] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:37.656 [2024-07-16 00:38:55.302822] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:37.656 [2024-07-16 00:38:55.302834] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb53dc9b000 00:12:37.656 [2024-07-16 00:38:55.304426] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:37.656 [2024-07-16 00:38:55.326206] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:37.656 [2024-07-16 00:38:55.326234] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:37.656 [2024-07-16 00:38:55.332057] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:37.656 [2024-07-16 00:38:55.332112] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:37.656 [2024-07-16 00:38:55.332211] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:37.656 [2024-07-16 00:38:55.332234] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:37.656 [2024-07-16 00:38:55.332242] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:37.656 [2024-07-16 00:38:55.333054] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:37.656 [2024-07-16 00:38:55.333067] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:37.656 [2024-07-16 00:38:55.333076] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:37.657 [2024-07-16 00:38:55.334061] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:37.657 [2024-07-16 00:38:55.334073] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:37.657 [2024-07-16 00:38:55.334082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:37.657 [2024-07-16 00:38:55.335072] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:37.657 [2024-07-16 00:38:55.335083] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:37.657 [2024-07-16 00:38:55.336078] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:37.657 [2024-07-16 00:38:55.336089] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:37.657 [2024-07-16 00:38:55.336095] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:37.657 [2024-07-16 00:38:55.336104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:37.657 [2024-07-16 00:38:55.336211] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:37.657 [2024-07-16 00:38:55.336218] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:37.657 [2024-07-16 00:38:55.336224] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:37.657 [2024-07-16 00:38:55.337086] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:37.657 [2024-07-16 00:38:55.338090] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:37.657 [2024-07-16 00:38:55.339095] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:37.657 [2024-07-16 00:38:55.340100] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:37.657 [2024-07-16 00:38:55.340225] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:37.657 [2024-07-16 00:38:55.341111] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:37.657 [2024-07-16 00:38:55.341122] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:37.657 [2024-07-16 00:38:55.341129] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341156] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:37.657 [2024-07-16 00:38:55.341166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341185] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:37.657 [2024-07-16 00:38:55.341191] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:37.657 [2024-07-16 00:38:55.341207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:37.657 [2024-07-16 00:38:55.341284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:37.657 [2024-07-16 00:38:55.341297] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:37.657 [2024-07-16 00:38:55.341303] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:37.657 [2024-07-16 00:38:55.341308] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:37.657 [2024-07-16 00:38:55.341314] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:37.657 [2024-07-16 00:38:55.341320] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:37.657 [2024-07-16 00:38:55.341326] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:37.657 [2024-07-16 00:38:55.341332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341356] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:37.657 [2024-07-16 00:38:55.341378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:37.657 [2024-07-16 00:38:55.341391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.657 [2024-07-16 00:38:55.341402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.657 [2024-07-16 00:38:55.341413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.657 [2024-07-16 00:38:55.341423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.657 [2024-07-16 00:38:55.341429] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:37.657 [2024-07-16 00:38:55.341473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:37.657 [2024-07-16 00:38:55.341480] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:37.657 [2024-07-16 00:38:55.341491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:37.657 [2024-07-16 00:38:55.341543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:37.657 [2024-07-16 00:38:55.341617] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341628] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341638] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:37.657 [2024-07-16 00:38:55.341643] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:37.657 [2024-07-16 00:38:55.341652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:37.657 [2024-07-16 00:38:55.341676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:37.657 [2024-07-16 00:38:55.341692] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:37.657 [2024-07-16 00:38:55.341705] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341724] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:37.657 [2024-07-16 00:38:55.341730] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:37.657 [2024-07-16 00:38:55.341738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:37.657 [2024-07-16 00:38:55.341769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:37.657 [2024-07-16 00:38:55.341785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341795] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341805] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:37.657 [2024-07-16 00:38:55.341810] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:37.657 [2024-07-16 00:38:55.341817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:37.657 [2024-07-16 00:38:55.341836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:37.657 [2024-07-16 00:38:55.341846] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
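The admin-command trace running through this stretch is spdk_nvme_identify attaching to the first controller purely by its -r transport ID string (trtype:VFIOUSER, traddr = the listener directory, subnqn = the subsystem NQN). Although the log only shows device 1 being queried here, the second endpoint created above can be inspected with the same pattern; the command below is an illustrative variant and is not part of this log.

  # Hypothetical follow-up: identify the second vfio-user controller the same way.
  "$SPDK/build/bin/spdk_nvme_identify" \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -g -L nvme -L nvme_vfio -L vfio_pci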
00:12:37.657 [2024-07-16 00:38:55.341866] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341893] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:37.657 [2024-07-16 00:38:55.341899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:37.657 [2024-07-16 00:38:55.341905] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:37.657 [2024-07-16 00:38:55.341925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:37.657 [2024-07-16 00:38:55.341941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:37.657 [2024-07-16 00:38:55.341955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:37.657 [2024-07-16 00:38:55.341976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:37.657 [2024-07-16 00:38:55.341989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:37.657 [2024-07-16 00:38:55.342009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:37.657 [2024-07-16 00:38:55.342023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:37.658 [2024-07-16 00:38:55.342038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:37.658 [2024-07-16 00:38:55.342054] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:37.658 [2024-07-16 00:38:55.342060] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:37.658 [2024-07-16 00:38:55.342065] nvme_pcie_common.c:1240:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:37.658 [2024-07-16 00:38:55.342069] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:37.658 [2024-07-16 00:38:55.342077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:37.658 [2024-07-16 00:38:55.342086] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:37.658 
[2024-07-16 00:38:55.342092] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:37.658 [2024-07-16 00:38:55.342100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:37.658 [2024-07-16 00:38:55.342108] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:37.658 [2024-07-16 00:38:55.342114] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:37.658 [2024-07-16 00:38:55.342122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:37.658 [2024-07-16 00:38:55.342133] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:37.658 [2024-07-16 00:38:55.342138] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:37.658 [2024-07-16 00:38:55.342146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:37.658 [2024-07-16 00:38:55.342155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:37.658 [2024-07-16 00:38:55.342170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:37.658 [2024-07-16 00:38:55.342184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:37.658 [2024-07-16 00:38:55.342193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:37.658 ===================================================== 00:12:37.658 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:37.658 ===================================================== 00:12:37.658 Controller Capabilities/Features 00:12:37.658 ================================ 00:12:37.658 Vendor ID: 4e58 00:12:37.658 Subsystem Vendor ID: 4e58 00:12:37.658 Serial Number: SPDK1 00:12:37.658 Model Number: SPDK bdev Controller 00:12:37.658 Firmware Version: 24.09 00:12:37.658 Recommended Arb Burst: 6 00:12:37.658 IEEE OUI Identifier: 8d 6b 50 00:12:37.658 Multi-path I/O 00:12:37.658 May have multiple subsystem ports: Yes 00:12:37.658 May have multiple controllers: Yes 00:12:37.658 Associated with SR-IOV VF: No 00:12:37.658 Max Data Transfer Size: 131072 00:12:37.658 Max Number of Namespaces: 32 00:12:37.658 Max Number of I/O Queues: 127 00:12:37.658 NVMe Specification Version (VS): 1.3 00:12:37.658 NVMe Specification Version (Identify): 1.3 00:12:37.658 Maximum Queue Entries: 256 00:12:37.658 Contiguous Queues Required: Yes 00:12:37.658 Arbitration Mechanisms Supported 00:12:37.658 Weighted Round Robin: Not Supported 00:12:37.658 Vendor Specific: Not Supported 00:12:37.658 Reset Timeout: 15000 ms 00:12:37.658 Doorbell Stride: 4 bytes 00:12:37.658 NVM Subsystem Reset: Not Supported 00:12:37.658 Command Sets Supported 00:12:37.658 NVM Command Set: Supported 00:12:37.658 Boot Partition: Not Supported 00:12:37.658 Memory Page Size Minimum: 4096 bytes 00:12:37.658 Memory Page Size Maximum: 4096 bytes 00:12:37.658 Persistent Memory Region: Not Supported 
00:12:37.658 Optional Asynchronous Events Supported 00:12:37.658 Namespace Attribute Notices: Supported 00:12:37.658 Firmware Activation Notices: Not Supported 00:12:37.658 ANA Change Notices: Not Supported 00:12:37.658 PLE Aggregate Log Change Notices: Not Supported 00:12:37.658 LBA Status Info Alert Notices: Not Supported 00:12:37.658 EGE Aggregate Log Change Notices: Not Supported 00:12:37.658 Normal NVM Subsystem Shutdown event: Not Supported 00:12:37.658 Zone Descriptor Change Notices: Not Supported 00:12:37.658 Discovery Log Change Notices: Not Supported 00:12:37.658 Controller Attributes 00:12:37.658 128-bit Host Identifier: Supported 00:12:37.658 Non-Operational Permissive Mode: Not Supported 00:12:37.658 NVM Sets: Not Supported 00:12:37.658 Read Recovery Levels: Not Supported 00:12:37.658 Endurance Groups: Not Supported 00:12:37.658 Predictable Latency Mode: Not Supported 00:12:37.658 Traffic Based Keep ALive: Not Supported 00:12:37.658 Namespace Granularity: Not Supported 00:12:37.658 SQ Associations: Not Supported 00:12:37.658 UUID List: Not Supported 00:12:37.658 Multi-Domain Subsystem: Not Supported 00:12:37.658 Fixed Capacity Management: Not Supported 00:12:37.658 Variable Capacity Management: Not Supported 00:12:37.658 Delete Endurance Group: Not Supported 00:12:37.658 Delete NVM Set: Not Supported 00:12:37.658 Extended LBA Formats Supported: Not Supported 00:12:37.658 Flexible Data Placement Supported: Not Supported 00:12:37.658 00:12:37.658 Controller Memory Buffer Support 00:12:37.658 ================================ 00:12:37.658 Supported: No 00:12:37.658 00:12:37.658 Persistent Memory Region Support 00:12:37.658 ================================ 00:12:37.658 Supported: No 00:12:37.658 00:12:37.658 Admin Command Set Attributes 00:12:37.658 ============================ 00:12:37.658 Security Send/Receive: Not Supported 00:12:37.658 Format NVM: Not Supported 00:12:37.658 Firmware Activate/Download: Not Supported 00:12:37.658 Namespace Management: Not Supported 00:12:37.658 Device Self-Test: Not Supported 00:12:37.658 Directives: Not Supported 00:12:37.658 NVMe-MI: Not Supported 00:12:37.658 Virtualization Management: Not Supported 00:12:37.658 Doorbell Buffer Config: Not Supported 00:12:37.658 Get LBA Status Capability: Not Supported 00:12:37.658 Command & Feature Lockdown Capability: Not Supported 00:12:37.658 Abort Command Limit: 4 00:12:37.658 Async Event Request Limit: 4 00:12:37.658 Number of Firmware Slots: N/A 00:12:37.658 Firmware Slot 1 Read-Only: N/A 00:12:37.658 Firmware Activation Without Reset: N/A 00:12:37.658 Multiple Update Detection Support: N/A 00:12:37.658 Firmware Update Granularity: No Information Provided 00:12:37.658 Per-Namespace SMART Log: No 00:12:37.658 Asymmetric Namespace Access Log Page: Not Supported 00:12:37.658 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:37.658 Command Effects Log Page: Supported 00:12:37.658 Get Log Page Extended Data: Supported 00:12:37.658 Telemetry Log Pages: Not Supported 00:12:37.658 Persistent Event Log Pages: Not Supported 00:12:37.658 Supported Log Pages Log Page: May Support 00:12:37.658 Commands Supported & Effects Log Page: Not Supported 00:12:37.658 Feature Identifiers & Effects Log Page:May Support 00:12:37.658 NVMe-MI Commands & Effects Log Page: May Support 00:12:37.658 Data Area 4 for Telemetry Log: Not Supported 00:12:37.658 Error Log Page Entries Supported: 128 00:12:37.658 Keep Alive: Supported 00:12:37.658 Keep Alive Granularity: 10000 ms 00:12:37.658 00:12:37.658 NVM Command Set Attributes 
00:12:37.658 ========================== 00:12:37.658 Submission Queue Entry Size 00:12:37.658 Max: 64 00:12:37.658 Min: 64 00:12:37.658 Completion Queue Entry Size 00:12:37.658 Max: 16 00:12:37.658 Min: 16 00:12:37.658 Number of Namespaces: 32 00:12:37.658 Compare Command: Supported 00:12:37.658 Write Uncorrectable Command: Not Supported 00:12:37.658 Dataset Management Command: Supported 00:12:37.658 Write Zeroes Command: Supported 00:12:37.658 Set Features Save Field: Not Supported 00:12:37.658 Reservations: Not Supported 00:12:37.658 Timestamp: Not Supported 00:12:37.658 Copy: Supported 00:12:37.658 Volatile Write Cache: Present 00:12:37.658 Atomic Write Unit (Normal): 1 00:12:37.658 Atomic Write Unit (PFail): 1 00:12:37.658 Atomic Compare & Write Unit: 1 00:12:37.658 Fused Compare & Write: Supported 00:12:37.658 Scatter-Gather List 00:12:37.658 SGL Command Set: Supported (Dword aligned) 00:12:37.658 SGL Keyed: Not Supported 00:12:37.658 SGL Bit Bucket Descriptor: Not Supported 00:12:37.658 SGL Metadata Pointer: Not Supported 00:12:37.658 Oversized SGL: Not Supported 00:12:37.658 SGL Metadata Address: Not Supported 00:12:37.658 SGL Offset: Not Supported 00:12:37.658 Transport SGL Data Block: Not Supported 00:12:37.658 Replay Protected Memory Block: Not Supported 00:12:37.658 00:12:37.658 Firmware Slot Information 00:12:37.658 ========================= 00:12:37.658 Active slot: 1 00:12:37.658 Slot 1 Firmware Revision: 24.09 00:12:37.658 00:12:37.658 00:12:37.658 Commands Supported and Effects 00:12:37.658 ============================== 00:12:37.658 Admin Commands 00:12:37.658 -------------- 00:12:37.658 Get Log Page (02h): Supported 00:12:37.658 Identify (06h): Supported 00:12:37.658 Abort (08h): Supported 00:12:37.658 Set Features (09h): Supported 00:12:37.658 Get Features (0Ah): Supported 00:12:37.658 Asynchronous Event Request (0Ch): Supported 00:12:37.658 Keep Alive (18h): Supported 00:12:37.658 I/O Commands 00:12:37.658 ------------ 00:12:37.658 Flush (00h): Supported LBA-Change 00:12:37.658 Write (01h): Supported LBA-Change 00:12:37.658 Read (02h): Supported 00:12:37.658 Compare (05h): Supported 00:12:37.659 Write Zeroes (08h): Supported LBA-Change 00:12:37.659 Dataset Management (09h): Supported LBA-Change 00:12:37.659 Copy (19h): Supported LBA-Change 00:12:37.659 00:12:37.659 Error Log 00:12:37.659 ========= 00:12:37.659 00:12:37.659 Arbitration 00:12:37.659 =========== 00:12:37.659 Arbitration Burst: 1 00:12:37.659 00:12:37.659 Power Management 00:12:37.659 ================ 00:12:37.659 Number of Power States: 1 00:12:37.659 Current Power State: Power State #0 00:12:37.659 Power State #0: 00:12:37.659 Max Power: 0.00 W 00:12:37.659 Non-Operational State: Operational 00:12:37.659 Entry Latency: Not Reported 00:12:37.659 Exit Latency: Not Reported 00:12:37.659 Relative Read Throughput: 0 00:12:37.659 Relative Read Latency: 0 00:12:37.659 Relative Write Throughput: 0 00:12:37.659 Relative Write Latency: 0 00:12:37.659 Idle Power: Not Reported 00:12:37.659 Active Power: Not Reported 00:12:37.659 Non-Operational Permissive Mode: Not Supported 00:12:37.659 00:12:37.659 Health Information 00:12:37.659 ================== 00:12:37.659 Critical Warnings: 00:12:37.659 Available Spare Space: OK 00:12:37.659 Temperature: OK 00:12:37.659 Device Reliability: OK 00:12:37.659 Read Only: No 00:12:37.659 Volatile Memory Backup: OK 00:12:37.659 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:37.659 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:37.659 Available Spare: 0% 00:12:37.659 
Available Sp[2024-07-16 00:38:55.342322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:37.659 [2024-07-16 00:38:55.342336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:37.659 [2024-07-16 00:38:55.342370] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:37.659 [2024-07-16 00:38:55.342382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.659 [2024-07-16 00:38:55.342390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.659 [2024-07-16 00:38:55.342398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.659 [2024-07-16 00:38:55.342407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.659 [2024-07-16 00:38:55.346264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:37.659 [2024-07-16 00:38:55.346279] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:37.659 [2024-07-16 00:38:55.347164] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:37.659 [2024-07-16 00:38:55.347240] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:37.659 [2024-07-16 00:38:55.347249] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:37.659 [2024-07-16 00:38:55.348176] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:37.659 [2024-07-16 00:38:55.348190] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:37.659 [2024-07-16 00:38:55.348246] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:37.659 [2024-07-16 00:38:55.350214] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:37.659 are Threshold: 0% 00:12:37.659 Life Percentage Used: 0% 00:12:37.659 Data Units Read: 0 00:12:37.659 Data Units Written: 0 00:12:37.659 Host Read Commands: 0 00:12:37.659 Host Write Commands: 0 00:12:37.659 Controller Busy Time: 0 minutes 00:12:37.659 Power Cycles: 0 00:12:37.659 Power On Hours: 0 hours 00:12:37.659 Unsafe Shutdowns: 0 00:12:37.659 Unrecoverable Media Errors: 0 00:12:37.659 Lifetime Error Log Entries: 0 00:12:37.659 Warning Temperature Time: 0 minutes 00:12:37.659 Critical Temperature Time: 0 minutes 00:12:37.659 00:12:37.659 Number of Queues 00:12:37.659 ================ 00:12:37.659 Number of I/O Submission Queues: 127 00:12:37.659 Number of I/O Completion Queues: 127 00:12:37.659 00:12:37.659 Active Namespaces 00:12:37.659 ================= 00:12:37.659 Namespace ID:1 00:12:37.659 Error Recovery Timeout: Unlimited 00:12:37.659 Command 
Set Identifier: NVM (00h) 00:12:37.659 Deallocate: Supported 00:12:37.659 Deallocated/Unwritten Error: Not Supported 00:12:37.659 Deallocated Read Value: Unknown 00:12:37.659 Deallocate in Write Zeroes: Not Supported 00:12:37.659 Deallocated Guard Field: 0xFFFF 00:12:37.659 Flush: Supported 00:12:37.659 Reservation: Supported 00:12:37.659 Namespace Sharing Capabilities: Multiple Controllers 00:12:37.659 Size (in LBAs): 131072 (0GiB) 00:12:37.659 Capacity (in LBAs): 131072 (0GiB) 00:12:37.659 Utilization (in LBAs): 131072 (0GiB) 00:12:37.659 NGUID: D856BDD6AF5C4DEBBDFF5E0CA7837E90 00:12:37.659 UUID: d856bdd6-af5c-4deb-bdff-5e0ca7837e90 00:12:37.659 Thin Provisioning: Not Supported 00:12:37.659 Per-NS Atomic Units: Yes 00:12:37.659 Atomic Boundary Size (Normal): 0 00:12:37.659 Atomic Boundary Size (PFail): 0 00:12:37.659 Atomic Boundary Offset: 0 00:12:37.659 Maximum Single Source Range Length: 65535 00:12:37.659 Maximum Copy Length: 65535 00:12:37.659 Maximum Source Range Count: 1 00:12:37.659 NGUID/EUI64 Never Reused: No 00:12:37.659 Namespace Write Protected: No 00:12:37.659 Number of LBA Formats: 1 00:12:37.659 Current LBA Format: LBA Format #00 00:12:37.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:37.659 00:12:37.659 00:38:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:37.659 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.918 [2024-07-16 00:38:55.622079] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:43.236 Initializing NVMe Controllers 00:12:43.236 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:43.236 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:43.236 Initialization complete. Launching workers. 00:12:43.236 ======================================================== 00:12:43.236 Latency(us) 00:12:43.236 Device Information : IOPS MiB/s Average min max 00:12:43.236 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 18606.80 72.68 6884.69 2717.59 13566.57 00:12:43.236 ======================================================== 00:12:43.236 Total : 18606.80 72.68 6884.69 2717.59 13566.57 00:12:43.236 00:12:43.236 [2024-07-16 00:39:00.649640] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:43.236 00:39:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:43.236 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.236 [2024-07-16 00:39:00.937574] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.596 Initializing NVMe Controllers 00:12:48.596 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.596 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:48.596 Initialization complete. Launching workers. 
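The @84 spdk_nvme_perf pass above drives 4 KiB reads (-o 4096 -w read) at queue depth 128 (-q 128) against the vfio-user controller for 5 seconds. Its MiB/s column follows directly from the IOPS column and the I/O size; a quick check in Python, with the numbers copied from the read results table above:

iops = 18606.80          # "read" row from the results table above
io_size = 4096           # bytes, matches -o 4096
print(f"{iops * io_size / 2**20:.2f} MiB/s")   # ~72.68, matching the reported column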
00:12:48.596 ======================================================== 00:12:48.596 Latency(us) 00:12:48.596 Device Information : IOPS MiB/s Average min max 00:12:48.596 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15331.99 59.89 8353.52 7627.64 15967.28 00:12:48.596 ======================================================== 00:12:48.597 Total : 15331.99 59.89 8353.52 7627.64 15967.28 00:12:48.597 00:12:48.597 [2024-07-16 00:39:05.980940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.597 00:39:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:48.597 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.597 [2024-07-16 00:39:06.279828] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.870 [2024-07-16 00:39:11.378943] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.870 Initializing NVMe Controllers 00:12:53.870 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:53.870 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:53.870 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:53.870 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:53.870 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:53.870 Initialization complete. Launching workers. 00:12:53.870 Starting thread on core 2 00:12:53.870 Starting thread on core 3 00:12:53.870 Starting thread on core 1 00:12:53.870 00:39:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:53.870 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.129 [2024-07-16 00:39:11.740028] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:58.322 [2024-07-16 00:39:15.449695] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:58.322 Initializing NVMe Controllers 00:12:58.322 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:58.322 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:58.322 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:58.322 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:58.322 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:58.322 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:58.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:58.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:58.322 Initialization complete. Launching workers. 
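With both perf passes reported, the average latencies and IOPS in the two tables above can also be cross-checked against the fixed queue depth: for a closed-loop load generator at steady state, IOPS is roughly queue depth divided by mean latency. This is a rule-of-thumb sanity check, not something the tool itself reports, and small deviations (ramp-up, completion batching) are expected:

qd = 128  # -q 128 in both spdk_nvme_perf invocations
for name, avg_us, reported in [("read", 6884.69, 18606.80), ("write", 8353.52, 15331.99)]:
    print(f"{name}: ~{qd / (avg_us / 1e6):.0f} IOPS estimated vs {reported:.2f} reported")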
00:12:58.322 Starting thread on core 1 with urgent priority queue 00:12:58.322 Starting thread on core 2 with urgent priority queue 00:12:58.322 Starting thread on core 3 with urgent priority queue 00:12:58.322 Starting thread on core 0 with urgent priority queue 00:12:58.322 SPDK bdev Controller (SPDK1 ) core 0: 769.67 IO/s 129.93 secs/100000 ios 00:12:58.322 SPDK bdev Controller (SPDK1 ) core 1: 734.33 IO/s 136.18 secs/100000 ios 00:12:58.322 SPDK bdev Controller (SPDK1 ) core 2: 775.33 IO/s 128.98 secs/100000 ios 00:12:58.322 SPDK bdev Controller (SPDK1 ) core 3: 629.33 IO/s 158.90 secs/100000 ios 00:12:58.322 ======================================================== 00:12:58.322 00:12:58.322 00:39:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:58.322 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.322 [2024-07-16 00:39:15.768052] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:58.322 Initializing NVMe Controllers 00:12:58.322 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:58.322 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:58.322 Namespace ID: 1 size: 0GB 00:12:58.322 Initialization complete. 00:12:58.322 INFO: using host memory buffer for IO 00:12:58.322 Hello world! 00:12:58.322 [2024-07-16 00:39:15.801449] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:58.322 00:39:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:58.322 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.322 [2024-07-16 00:39:16.131880] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:59.701 Initializing NVMe Controllers 00:12:59.701 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:59.701 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:59.701 Initialization complete. Launching workers. 
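In the arbitration summary above, the secs/100000 ios column is simply the fixed per-core I/O budget from the printed configuration (-n 100000) divided by the measured IO/s for that core, as a short check shows:

io_budget = 100000  # -n 100000 from the printed arbitration configuration above
for core, io_per_s in [(0, 769.67), (1, 734.33), (2, 775.33), (3, 629.33)]:
    print(f"core {core}: {io_budget / io_per_s:.2f} s")   # 129.93, 136.18, 128.98, 158.90

The overhead run launched just above then prints per-I/O submit and complete latency histograms, which follow.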
00:12:59.701 submit (in ns) avg, min, max = 9474.2, 4562.7, 4003260.0 00:12:59.701 complete (in ns) avg, min, max = 49273.3, 2729.1, 4001950.9 00:12:59.701 00:12:59.701 Submit histogram 00:12:59.701 ================ 00:12:59.701 Range in us Cumulative Count 00:12:59.701 4.538 - 4.567: 0.0290% ( 2) 00:12:59.701 4.567 - 4.596: 0.7258% ( 48) 00:12:59.701 4.596 - 4.625: 1.9887% ( 87) 00:12:59.701 4.625 - 4.655: 4.9644% ( 205) 00:12:59.701 4.655 - 4.684: 10.0740% ( 352) 00:12:59.701 4.684 - 4.713: 19.3787% ( 641) 00:12:59.701 4.713 - 4.742: 29.5689% ( 702) 00:12:59.701 4.742 - 4.771: 40.9784% ( 786) 00:12:59.701 4.771 - 4.800: 52.8088% ( 815) 00:12:59.701 4.800 - 4.829: 63.2022% ( 716) 00:12:59.701 4.829 - 4.858: 72.5069% ( 641) 00:12:59.701 4.858 - 4.887: 78.7487% ( 430) 00:12:59.701 4.887 - 4.916: 83.1325% ( 302) 00:12:59.701 4.916 - 4.945: 85.7454% ( 180) 00:12:59.701 4.945 - 4.975: 87.7196% ( 136) 00:12:59.701 4.975 - 5.004: 89.2292% ( 104) 00:12:59.701 5.004 - 5.033: 91.1453% ( 132) 00:12:59.701 5.033 - 5.062: 93.0904% ( 134) 00:12:59.701 5.062 - 5.091: 94.8178% ( 119) 00:12:59.701 5.091 - 5.120: 96.5452% ( 119) 00:12:59.701 5.120 - 5.149: 97.6194% ( 74) 00:12:59.701 5.149 - 5.178: 98.3887% ( 53) 00:12:59.701 5.178 - 5.207: 98.8823% ( 34) 00:12:59.701 5.207 - 5.236: 99.1145% ( 16) 00:12:59.701 5.236 - 5.265: 99.3468% ( 16) 00:12:59.701 5.265 - 5.295: 99.4194% ( 5) 00:12:59.701 5.295 - 5.324: 99.4484% ( 2) 00:12:59.701 5.324 - 5.353: 99.4774% ( 2) 00:12:59.701 5.353 - 5.382: 99.4919% ( 1) 00:12:59.701 5.382 - 5.411: 99.5065% ( 1) 00:12:59.701 5.527 - 5.556: 99.5210% ( 1) 00:12:59.701 5.615 - 5.644: 99.5355% ( 1) 00:12:59.701 7.564 - 7.622: 99.5500% ( 1) 00:12:59.701 7.971 - 8.029: 99.5790% ( 2) 00:12:59.701 8.378 - 8.436: 99.5936% ( 1) 00:12:59.701 8.553 - 8.611: 99.6081% ( 1) 00:12:59.701 9.018 - 9.076: 99.6226% ( 1) 00:12:59.701 9.309 - 9.367: 99.6371% ( 1) 00:12:59.701 9.484 - 9.542: 99.6516% ( 1) 00:12:59.701 9.716 - 9.775: 99.6807% ( 2) 00:12:59.701 9.775 - 9.833: 99.6952% ( 1) 00:12:59.701 9.833 - 9.891: 99.7097% ( 1) 00:12:59.701 9.949 - 10.007: 99.7242% ( 1) 00:12:59.701 10.065 - 10.124: 99.7387% ( 1) 00:12:59.701 10.124 - 10.182: 99.7532% ( 1) 00:12:59.701 10.240 - 10.298: 99.7823% ( 2) 00:12:59.701 10.415 - 10.473: 99.7968% ( 1) 00:12:59.701 10.705 - 10.764: 99.8113% ( 1) 00:12:59.701 10.764 - 10.822: 99.8258% ( 1) 00:12:59.701 10.938 - 10.996: 99.8403% ( 1) 00:12:59.701 11.695 - 11.753: 99.8694% ( 2) 00:12:59.701 11.753 - 11.811: 99.8839% ( 1) 00:12:59.701 3991.738 - 4021.527: 100.0000% ( 8) 00:12:59.701 00:12:59.701 Complete histogram 00:12:59.701 ================== 00:12:59.701 Range in us Cumulative Count 00:12:59.701 2.720 - 2.735: 0.0871% ( 6) 00:12:59.701 2.735 - 2.749: 1.1177% ( 71) 00:12:59.701 2.749 - 2.764: 4.3693% ( 224) 00:12:59.701 2.764 - 2.778: 7.4902% ( 215) 00:12:59.701 2.778 - 2.793: 8.7386% ( 86) 00:12:59.701 2.793 - 2.807: 10.8869% ( 148) 00:12:59.701 2.807 - 2.822: 31.7608% ( 1438) 00:12:59.701 2.822 - 2.836: 73.4504% ( 2872) 00:12:59.701 2.836 - 2.851: 88.0534% ( 1006) 00:12:59.701 2.851 - 2.865: 91.1308% ( 212) 00:12:59.701 2.865 - 2.880: 93.2356% ( 145) 00:12:59.701 2.880 - 2.895: 94.0195% ( 54) 00:12:59.701 2.895 - 2.909: 94.5420% ( 36) 00:12:59.701 2.909 - 2.924: 95.7323% ( 82) 00:12:59.701 2.924 - 2.938: 97.3581% ( 112) 00:12:59.701 2.938 - 2.953: 98.0258% ( 46) 00:12:59.701 2.953 - 2.967: 98.2291% ( 14) 00:12:59.701 2.967 - 2.982: 98.3307% ( 7) 00:12:59.701 2.982 - 2.996: 98.3887% ( 4) 00:12:59.701 3.011 - 3.025: 98.4178% ( 2) 00:12:59.701 3.025 - 
3.040: 98.4468% ( 2) 00:12:59.701 3.040 - 3.055: 98.4613% ( 1) 00:12:59.701 3.055 - 3.069: 98.4903% ( 2) 00:12:59.701 3.069 - 3.084: 98.5049% ( 1) 00:12:59.701 3.156 - [2024-07-16 00:39:17.155843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:59.701 3.171: 98.5339% ( 2) 00:12:59.701 3.200 - 3.215: 98.5629% ( 2) 00:12:59.701 3.273 - 3.287: 98.5920% ( 2) 00:12:59.701 3.418 - 3.433: 98.6065% ( 1) 00:12:59.701 3.564 - 3.578: 98.6210% ( 1) 00:12:59.701 3.593 - 3.607: 98.6355% ( 1) 00:12:59.701 3.622 - 3.636: 98.6500% ( 1) 00:12:59.701 5.818 - 5.847: 98.6645% ( 1) 00:12:59.701 5.847 - 5.876: 98.6791% ( 1) 00:12:59.701 6.371 - 6.400: 98.6936% ( 1) 00:12:59.701 6.895 - 6.924: 98.7081% ( 1) 00:12:59.701 7.244 - 7.273: 98.7226% ( 1) 00:12:59.701 7.389 - 7.418: 98.7371% ( 1) 00:12:59.701 7.564 - 7.622: 98.7516% ( 1) 00:12:59.701 7.855 - 7.913: 98.7661% ( 1) 00:12:59.701 8.029 - 8.087: 98.7952% ( 2) 00:12:59.701 8.145 - 8.204: 98.8097% ( 1) 00:12:59.701 15.127 - 15.244: 98.8242% ( 1) 00:12:59.701 166.633 - 167.564: 98.8387% ( 1) 00:12:59.701 3991.738 - 4021.527: 100.0000% ( 80) 00:12:59.701 00:12:59.701 00:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:59.701 00:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:59.701 00:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:59.701 00:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:59.701 00:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:59.701 [ 00:12:59.701 { 00:12:59.701 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:59.701 "subtype": "Discovery", 00:12:59.701 "listen_addresses": [], 00:12:59.701 "allow_any_host": true, 00:12:59.701 "hosts": [] 00:12:59.701 }, 00:12:59.701 { 00:12:59.701 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:59.701 "subtype": "NVMe", 00:12:59.701 "listen_addresses": [ 00:12:59.702 { 00:12:59.702 "trtype": "VFIOUSER", 00:12:59.702 "adrfam": "IPv4", 00:12:59.702 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:59.702 "trsvcid": "0" 00:12:59.702 } 00:12:59.702 ], 00:12:59.702 "allow_any_host": true, 00:12:59.702 "hosts": [], 00:12:59.702 "serial_number": "SPDK1", 00:12:59.702 "model_number": "SPDK bdev Controller", 00:12:59.702 "max_namespaces": 32, 00:12:59.702 "min_cntlid": 1, 00:12:59.702 "max_cntlid": 65519, 00:12:59.702 "namespaces": [ 00:12:59.702 { 00:12:59.702 "nsid": 1, 00:12:59.702 "bdev_name": "Malloc1", 00:12:59.702 "name": "Malloc1", 00:12:59.702 "nguid": "D856BDD6AF5C4DEBBDFF5E0CA7837E90", 00:12:59.702 "uuid": "d856bdd6-af5c-4deb-bdff-5e0ca7837e90" 00:12:59.702 } 00:12:59.702 ] 00:12:59.702 }, 00:12:59.702 { 00:12:59.702 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:59.702 "subtype": "NVMe", 00:12:59.702 "listen_addresses": [ 00:12:59.702 { 00:12:59.702 "trtype": "VFIOUSER", 00:12:59.702 "adrfam": "IPv4", 00:12:59.702 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:59.702 "trsvcid": "0" 00:12:59.702 } 00:12:59.702 ], 00:12:59.702 "allow_any_host": true, 00:12:59.702 "hosts": [], 00:12:59.702 "serial_number": "SPDK2", 00:12:59.702 "model_number": "SPDK bdev Controller", 00:12:59.702 "max_namespaces": 32, 00:12:59.702 "min_cntlid": 1, 
00:12:59.702 "max_cntlid": 65519, 00:12:59.702 "namespaces": [ 00:12:59.702 { 00:12:59.702 "nsid": 1, 00:12:59.702 "bdev_name": "Malloc2", 00:12:59.702 "name": "Malloc2", 00:12:59.702 "nguid": "3FE539CF8A6340318A06863EDA5F0702", 00:12:59.702 "uuid": "3fe539cf-8a63-4031-8a06-863eda5f0702" 00:12:59.702 } 00:12:59.702 ] 00:12:59.702 } 00:12:59.702 ] 00:12:59.702 00:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:59.702 00:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:59.702 00:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2960100 00:12:59.702 00:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:59.702 00:39:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:59.702 00:39:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:59.702 00:39:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:59.702 00:39:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:59.702 00:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:59.702 00:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:59.702 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.961 [2024-07-16 00:39:17.666056] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:59.961 Malloc3 00:12:59.961 00:39:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:00.221 [2024-07-16 00:39:17.993920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:00.221 00:39:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:00.221 Asynchronous Event Request test 00:13:00.221 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:00.221 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:00.221 Registering asynchronous event callbacks... 00:13:00.221 Starting namespace attribute notice tests for all controllers... 00:13:00.221 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:00.221 aer_cb - Changed Namespace 00:13:00.221 Cleaning up... 
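The AER test above registers for namespace-attribute notices, creates a second malloc bdev and attaches it to cnode1, and then observes the "aer_cb - Changed Namespace" callback fire; the nvmf_get_subsystems listing that follows shows Malloc3 attached as nsid 2. A minimal sketch of driving the same RPC sequence outside the test script, using the rpc.py path from this workspace and the target's default RPC socket (an assumption; the test may point rpc.py at a different socket):

import json, subprocess

RPC_PY = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def rpc(*args):
    # Shell out to scripts/rpc.py exactly as the test script does.
    return subprocess.run([RPC_PY, *args], check=True,
                          capture_output=True, text=True).stdout

# 64 MiB malloc bdev with 512-byte blocks, attached to cnode1 as nsid 2,
# mirroring the bdev_malloc_create / nvmf_subsystem_add_ns calls above.
rpc("bdev_malloc_create", "64", "512", "--name", "Malloc3")
rpc("nvmf_subsystem_add_ns", "nqn.2019-07.io.spdk:cnode1", "Malloc3", "-n", "2")
print(json.dumps(json.loads(rpc("nvmf_get_subsystems")), indent=2))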
00:13:00.481 [ 00:13:00.481 { 00:13:00.481 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:00.481 "subtype": "Discovery", 00:13:00.481 "listen_addresses": [], 00:13:00.481 "allow_any_host": true, 00:13:00.481 "hosts": [] 00:13:00.481 }, 00:13:00.481 { 00:13:00.481 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:00.481 "subtype": "NVMe", 00:13:00.481 "listen_addresses": [ 00:13:00.481 { 00:13:00.481 "trtype": "VFIOUSER", 00:13:00.481 "adrfam": "IPv4", 00:13:00.481 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:00.481 "trsvcid": "0" 00:13:00.481 } 00:13:00.481 ], 00:13:00.481 "allow_any_host": true, 00:13:00.481 "hosts": [], 00:13:00.481 "serial_number": "SPDK1", 00:13:00.481 "model_number": "SPDK bdev Controller", 00:13:00.481 "max_namespaces": 32, 00:13:00.481 "min_cntlid": 1, 00:13:00.481 "max_cntlid": 65519, 00:13:00.481 "namespaces": [ 00:13:00.481 { 00:13:00.481 "nsid": 1, 00:13:00.481 "bdev_name": "Malloc1", 00:13:00.481 "name": "Malloc1", 00:13:00.481 "nguid": "D856BDD6AF5C4DEBBDFF5E0CA7837E90", 00:13:00.481 "uuid": "d856bdd6-af5c-4deb-bdff-5e0ca7837e90" 00:13:00.481 }, 00:13:00.481 { 00:13:00.481 "nsid": 2, 00:13:00.481 "bdev_name": "Malloc3", 00:13:00.481 "name": "Malloc3", 00:13:00.481 "nguid": "84E34CBC35744E289B5877E6389AEA65", 00:13:00.481 "uuid": "84e34cbc-3574-4e28-9b58-77e6389aea65" 00:13:00.481 } 00:13:00.481 ] 00:13:00.481 }, 00:13:00.481 { 00:13:00.481 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:00.481 "subtype": "NVMe", 00:13:00.481 "listen_addresses": [ 00:13:00.481 { 00:13:00.481 "trtype": "VFIOUSER", 00:13:00.481 "adrfam": "IPv4", 00:13:00.481 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:00.481 "trsvcid": "0" 00:13:00.481 } 00:13:00.481 ], 00:13:00.481 "allow_any_host": true, 00:13:00.481 "hosts": [], 00:13:00.481 "serial_number": "SPDK2", 00:13:00.481 "model_number": "SPDK bdev Controller", 00:13:00.481 "max_namespaces": 32, 00:13:00.481 "min_cntlid": 1, 00:13:00.481 "max_cntlid": 65519, 00:13:00.481 "namespaces": [ 00:13:00.481 { 00:13:00.481 "nsid": 1, 00:13:00.481 "bdev_name": "Malloc2", 00:13:00.481 "name": "Malloc2", 00:13:00.481 "nguid": "3FE539CF8A6340318A06863EDA5F0702", 00:13:00.481 "uuid": "3fe539cf-8a63-4031-8a06-863eda5f0702" 00:13:00.481 } 00:13:00.481 ] 00:13:00.481 } 00:13:00.481 ] 00:13:00.481 00:39:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2960100 00:13:00.481 00:39:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:00.481 00:39:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:00.481 00:39:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:00.481 00:39:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:00.481 [2024-07-16 00:39:18.299512] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:13:00.481 [2024-07-16 00:39:18.299551] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960160 ] 00:13:00.481 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.742 [2024-07-16 00:39:18.338055] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:00.743 [2024-07-16 00:39:18.346571] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:00.743 [2024-07-16 00:39:18.346598] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa5cfa95000 00:13:00.743 [2024-07-16 00:39:18.347583] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:00.743 [2024-07-16 00:39:18.348585] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:00.743 [2024-07-16 00:39:18.349589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:00.743 [2024-07-16 00:39:18.350604] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:00.743 [2024-07-16 00:39:18.355264] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:00.743 [2024-07-16 00:39:18.355648] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:00.743 [2024-07-16 00:39:18.356660] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:00.743 [2024-07-16 00:39:18.357663] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:00.743 [2024-07-16 00:39:18.358680] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:00.743 [2024-07-16 00:39:18.358694] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa5cfa8a000 00:13:00.743 [2024-07-16 00:39:18.360286] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:00.743 [2024-07-16 00:39:18.378323] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:00.743 [2024-07-16 00:39:18.378347] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:00.743 [2024-07-16 00:39:18.383440] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:00.743 [2024-07-16 00:39:18.383493] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:00.743 [2024-07-16 00:39:18.383587] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:13:00.743 [2024-07-16 00:39:18.383605] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:00.743 [2024-07-16 00:39:18.383613] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:00.743 [2024-07-16 00:39:18.384445] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:00.743 [2024-07-16 00:39:18.384458] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:00.743 [2024-07-16 00:39:18.384467] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:00.743 [2024-07-16 00:39:18.385452] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:00.743 [2024-07-16 00:39:18.385465] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:00.743 [2024-07-16 00:39:18.385474] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:00.743 [2024-07-16 00:39:18.386461] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:00.743 [2024-07-16 00:39:18.386474] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:00.743 [2024-07-16 00:39:18.387471] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:00.743 [2024-07-16 00:39:18.387484] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:00.743 [2024-07-16 00:39:18.387490] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:00.743 [2024-07-16 00:39:18.387498] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:00.743 [2024-07-16 00:39:18.387606] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:00.743 [2024-07-16 00:39:18.387612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:00.743 [2024-07-16 00:39:18.387618] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:00.743 [2024-07-16 00:39:18.388488] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:00.743 [2024-07-16 00:39:18.389495] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:00.743 [2024-07-16 00:39:18.390510] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:00.743 [2024-07-16 00:39:18.391517] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:00.743 [2024-07-16 00:39:18.391569] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:00.743 [2024-07-16 00:39:18.392534] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:00.743 [2024-07-16 00:39:18.392546] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:00.743 [2024-07-16 00:39:18.392552] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:00.743 [2024-07-16 00:39:18.392577] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:00.743 [2024-07-16 00:39:18.392592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:00.743 [2024-07-16 00:39:18.392610] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:00.743 [2024-07-16 00:39:18.392617] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:00.743 [2024-07-16 00:39:18.392630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:00.743 [2024-07-16 00:39:18.401266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:00.743 [2024-07-16 00:39:18.401282] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:00.743 [2024-07-16 00:39:18.401288] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:00.743 [2024-07-16 00:39:18.401293] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:00.743 [2024-07-16 00:39:18.401299] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:00.743 [2024-07-16 00:39:18.401306] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:00.743 [2024-07-16 00:39:18.401311] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:00.743 [2024-07-16 00:39:18.401317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:00.743 [2024-07-16 00:39:18.401327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:00.743 [2024-07-16 00:39:18.401343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:13:00.743 [2024-07-16 00:39:18.409274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:00.743 [2024-07-16 00:39:18.409291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.743 [2024-07-16 00:39:18.409302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.743 [2024-07-16 00:39:18.409312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.743 [2024-07-16 00:39:18.409323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:00.743 [2024-07-16 00:39:18.409329] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:00.743 [2024-07-16 00:39:18.409341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:00.743 [2024-07-16 00:39:18.409353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:00.743 [2024-07-16 00:39:18.417263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:00.743 [2024-07-16 00:39:18.417274] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:00.743 [2024-07-16 00:39:18.417281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:00.743 [2024-07-16 00:39:18.417293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:00.743 [2024-07-16 00:39:18.417304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:00.743 [2024-07-16 00:39:18.417316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:00.743 [2024-07-16 00:39:18.425265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:00.743 [2024-07-16 00:39:18.425345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:00.743 [2024-07-16 00:39:18.425356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:00.743 [2024-07-16 00:39:18.425366] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:00.743 [2024-07-16 00:39:18.425371] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:00.743 [2024-07-16 00:39:18.425380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:13:00.743 [2024-07-16 00:39:18.433262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:00.743 [2024-07-16 00:39:18.433277] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:00.743 [2024-07-16 00:39:18.433289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:00.743 [2024-07-16 00:39:18.433299] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:00.744 [2024-07-16 00:39:18.433309] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:00.744 [2024-07-16 00:39:18.433315] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:00.744 [2024-07-16 00:39:18.433323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:00.744 [2024-07-16 00:39:18.441266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:00.744 [2024-07-16 00:39:18.441284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:00.744 [2024-07-16 00:39:18.441293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:00.744 [2024-07-16 00:39:18.441303] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:00.744 [2024-07-16 00:39:18.441309] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:00.744 [2024-07-16 00:39:18.441317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:00.744 [2024-07-16 00:39:18.449265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:00.744 [2024-07-16 00:39:18.449279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:00.744 [2024-07-16 00:39:18.449287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:00.744 [2024-07-16 00:39:18.449300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:00.744 [2024-07-16 00:39:18.449307] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:00.744 [2024-07-16 00:39:18.449317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:00.744 [2024-07-16 00:39:18.449323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:00.744 
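The spdk_nvme_identify trace above also exposes the raw register traffic behind the report: the driver reads and writes the controller's BAR0 registers through the vfio-user transport, and the logged offsets correspond to the standard NVMe register layout (CAP at 0x00, VS at 0x08, CC at 0x14, CSTS at 0x1C, AQA at 0x24). Decoding a few of the logged values, on the assumption of that standard layout, reproduces several identify fields directly:

vs = 0x10300            # "offset 0x8" read above
print(f"VS {vs >> 16}.{(vs >> 8) & 0xff}.{vs & 0xff}")       # 1.3.0 -> "NVMe Specification Version (VS): 1.3"

aqa = 0xff00ff          # "offset 0x24" write above
print(f"admin queue entries: {(aqa & 0xfff) + 1} SQ / {((aqa >> 16) & 0xfff) + 1} CQ")  # 256/256 -> "Maximum Queue Entries: 256"

cc = 0x460001           # "offset 0x14" write above (controller enable)
print(f"CC.EN={cc & 1}, SQ entry 2^{(cc >> 16) & 0xf}=64 B, CQ entry 2^{(cc >> 20) & 0xf}=16 B")

csts = 0x1              # "offset 0x1c" read above
print(f"CSTS.RDY={csts & 1}")                                # controller reports ready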
[2024-07-16 00:39:18.449329] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:00.744 [2024-07-16 00:39:18.449335] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:00.744 [2024-07-16 00:39:18.449341] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:00.744 [2024-07-16 00:39:18.449360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:00.744 [2024-07-16 00:39:18.457265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:00.744 [2024-07-16 00:39:18.457284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:00.744 [2024-07-16 00:39:18.465264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:00.744 [2024-07-16 00:39:18.465281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:00.744 [2024-07-16 00:39:18.473265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:00.744 [2024-07-16 00:39:18.473282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:00.744 [2024-07-16 00:39:18.481264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:00.744 [2024-07-16 00:39:18.481286] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:00.744 [2024-07-16 00:39:18.481292] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:00.744 [2024-07-16 00:39:18.481297] nvme_pcie_common.c:1240:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:00.744 [2024-07-16 00:39:18.481301] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:00.744 [2024-07-16 00:39:18.481309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:00.744 [2024-07-16 00:39:18.481318] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:00.744 [2024-07-16 00:39:18.481324] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:00.744 [2024-07-16 00:39:18.481332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:00.744 [2024-07-16 00:39:18.481341] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:00.744 [2024-07-16 00:39:18.481347] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:00.744 [2024-07-16 00:39:18.481354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:13:00.744 [2024-07-16 00:39:18.481364] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:00.744 [2024-07-16 00:39:18.481369] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:00.744 [2024-07-16 00:39:18.481376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:00.744 [2024-07-16 00:39:18.489266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:00.744 [2024-07-16 00:39:18.489285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:00.744 [2024-07-16 00:39:18.489298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:00.744 [2024-07-16 00:39:18.489307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:00.744 ===================================================== 00:13:00.744 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:00.744 ===================================================== 00:13:00.744 Controller Capabilities/Features 00:13:00.744 ================================ 00:13:00.744 Vendor ID: 4e58 00:13:00.744 Subsystem Vendor ID: 4e58 00:13:00.744 Serial Number: SPDK2 00:13:00.744 Model Number: SPDK bdev Controller 00:13:00.744 Firmware Version: 24.09 00:13:00.744 Recommended Arb Burst: 6 00:13:00.744 IEEE OUI Identifier: 8d 6b 50 00:13:00.744 Multi-path I/O 00:13:00.744 May have multiple subsystem ports: Yes 00:13:00.744 May have multiple controllers: Yes 00:13:00.744 Associated with SR-IOV VF: No 00:13:00.744 Max Data Transfer Size: 131072 00:13:00.744 Max Number of Namespaces: 32 00:13:00.744 Max Number of I/O Queues: 127 00:13:00.744 NVMe Specification Version (VS): 1.3 00:13:00.744 NVMe Specification Version (Identify): 1.3 00:13:00.744 Maximum Queue Entries: 256 00:13:00.744 Contiguous Queues Required: Yes 00:13:00.744 Arbitration Mechanisms Supported 00:13:00.744 Weighted Round Robin: Not Supported 00:13:00.744 Vendor Specific: Not Supported 00:13:00.744 Reset Timeout: 15000 ms 00:13:00.744 Doorbell Stride: 4 bytes 00:13:00.744 NVM Subsystem Reset: Not Supported 00:13:00.744 Command Sets Supported 00:13:00.744 NVM Command Set: Supported 00:13:00.744 Boot Partition: Not Supported 00:13:00.744 Memory Page Size Minimum: 4096 bytes 00:13:00.744 Memory Page Size Maximum: 4096 bytes 00:13:00.744 Persistent Memory Region: Not Supported 00:13:00.744 Optional Asynchronous Events Supported 00:13:00.744 Namespace Attribute Notices: Supported 00:13:00.744 Firmware Activation Notices: Not Supported 00:13:00.744 ANA Change Notices: Not Supported 00:13:00.744 PLE Aggregate Log Change Notices: Not Supported 00:13:00.744 LBA Status Info Alert Notices: Not Supported 00:13:00.744 EGE Aggregate Log Change Notices: Not Supported 00:13:00.744 Normal NVM Subsystem Shutdown event: Not Supported 00:13:00.744 Zone Descriptor Change Notices: Not Supported 00:13:00.744 Discovery Log Change Notices: Not Supported 00:13:00.744 Controller Attributes 00:13:00.744 128-bit Host Identifier: Supported 00:13:00.744 Non-Operational Permissive Mode: Not Supported 00:13:00.744 NVM Sets: Not Supported 00:13:00.744 Read Recovery Levels: Not Supported 
00:13:00.744 Endurance Groups: Not Supported 00:13:00.744 Predictable Latency Mode: Not Supported 00:13:00.744 Traffic Based Keep ALive: Not Supported 00:13:00.744 Namespace Granularity: Not Supported 00:13:00.744 SQ Associations: Not Supported 00:13:00.744 UUID List: Not Supported 00:13:00.744 Multi-Domain Subsystem: Not Supported 00:13:00.744 Fixed Capacity Management: Not Supported 00:13:00.744 Variable Capacity Management: Not Supported 00:13:00.744 Delete Endurance Group: Not Supported 00:13:00.744 Delete NVM Set: Not Supported 00:13:00.744 Extended LBA Formats Supported: Not Supported 00:13:00.744 Flexible Data Placement Supported: Not Supported 00:13:00.744 00:13:00.744 Controller Memory Buffer Support 00:13:00.744 ================================ 00:13:00.744 Supported: No 00:13:00.744 00:13:00.744 Persistent Memory Region Support 00:13:00.744 ================================ 00:13:00.744 Supported: No 00:13:00.744 00:13:00.744 Admin Command Set Attributes 00:13:00.744 ============================ 00:13:00.744 Security Send/Receive: Not Supported 00:13:00.744 Format NVM: Not Supported 00:13:00.744 Firmware Activate/Download: Not Supported 00:13:00.744 Namespace Management: Not Supported 00:13:00.744 Device Self-Test: Not Supported 00:13:00.744 Directives: Not Supported 00:13:00.744 NVMe-MI: Not Supported 00:13:00.744 Virtualization Management: Not Supported 00:13:00.744 Doorbell Buffer Config: Not Supported 00:13:00.744 Get LBA Status Capability: Not Supported 00:13:00.744 Command & Feature Lockdown Capability: Not Supported 00:13:00.744 Abort Command Limit: 4 00:13:00.744 Async Event Request Limit: 4 00:13:00.744 Number of Firmware Slots: N/A 00:13:00.744 Firmware Slot 1 Read-Only: N/A 00:13:00.744 Firmware Activation Without Reset: N/A 00:13:00.744 Multiple Update Detection Support: N/A 00:13:00.744 Firmware Update Granularity: No Information Provided 00:13:00.744 Per-Namespace SMART Log: No 00:13:00.744 Asymmetric Namespace Access Log Page: Not Supported 00:13:00.744 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:00.744 Command Effects Log Page: Supported 00:13:00.744 Get Log Page Extended Data: Supported 00:13:00.744 Telemetry Log Pages: Not Supported 00:13:00.744 Persistent Event Log Pages: Not Supported 00:13:00.744 Supported Log Pages Log Page: May Support 00:13:00.745 Commands Supported & Effects Log Page: Not Supported 00:13:00.745 Feature Identifiers & Effects Log Page:May Support 00:13:00.745 NVMe-MI Commands & Effects Log Page: May Support 00:13:00.745 Data Area 4 for Telemetry Log: Not Supported 00:13:00.745 Error Log Page Entries Supported: 128 00:13:00.745 Keep Alive: Supported 00:13:00.745 Keep Alive Granularity: 10000 ms 00:13:00.745 00:13:00.745 NVM Command Set Attributes 00:13:00.745 ========================== 00:13:00.745 Submission Queue Entry Size 00:13:00.745 Max: 64 00:13:00.745 Min: 64 00:13:00.745 Completion Queue Entry Size 00:13:00.745 Max: 16 00:13:00.745 Min: 16 00:13:00.745 Number of Namespaces: 32 00:13:00.745 Compare Command: Supported 00:13:00.745 Write Uncorrectable Command: Not Supported 00:13:00.745 Dataset Management Command: Supported 00:13:00.745 Write Zeroes Command: Supported 00:13:00.745 Set Features Save Field: Not Supported 00:13:00.745 Reservations: Not Supported 00:13:00.745 Timestamp: Not Supported 00:13:00.745 Copy: Supported 00:13:00.745 Volatile Write Cache: Present 00:13:00.745 Atomic Write Unit (Normal): 1 00:13:00.745 Atomic Write Unit (PFail): 1 00:13:00.745 Atomic Compare & Write Unit: 1 00:13:00.745 Fused Compare & Write: 
Supported 00:13:00.745 Scatter-Gather List 00:13:00.745 SGL Command Set: Supported (Dword aligned) 00:13:00.745 SGL Keyed: Not Supported 00:13:00.745 SGL Bit Bucket Descriptor: Not Supported 00:13:00.745 SGL Metadata Pointer: Not Supported 00:13:00.745 Oversized SGL: Not Supported 00:13:00.745 SGL Metadata Address: Not Supported 00:13:00.745 SGL Offset: Not Supported 00:13:00.745 Transport SGL Data Block: Not Supported 00:13:00.745 Replay Protected Memory Block: Not Supported 00:13:00.745 00:13:00.745 Firmware Slot Information 00:13:00.745 ========================= 00:13:00.745 Active slot: 1 00:13:00.745 Slot 1 Firmware Revision: 24.09 00:13:00.745 00:13:00.745 00:13:00.745 Commands Supported and Effects 00:13:00.745 ============================== 00:13:00.745 Admin Commands 00:13:00.745 -------------- 00:13:00.745 Get Log Page (02h): Supported 00:13:00.745 Identify (06h): Supported 00:13:00.745 Abort (08h): Supported 00:13:00.745 Set Features (09h): Supported 00:13:00.745 Get Features (0Ah): Supported 00:13:00.745 Asynchronous Event Request (0Ch): Supported 00:13:00.745 Keep Alive (18h): Supported 00:13:00.745 I/O Commands 00:13:00.745 ------------ 00:13:00.745 Flush (00h): Supported LBA-Change 00:13:00.745 Write (01h): Supported LBA-Change 00:13:00.745 Read (02h): Supported 00:13:00.745 Compare (05h): Supported 00:13:00.745 Write Zeroes (08h): Supported LBA-Change 00:13:00.745 Dataset Management (09h): Supported LBA-Change 00:13:00.745 Copy (19h): Supported LBA-Change 00:13:00.745 00:13:00.745 Error Log 00:13:00.745 ========= 00:13:00.745 00:13:00.745 Arbitration 00:13:00.745 =========== 00:13:00.745 Arbitration Burst: 1 00:13:00.745 00:13:00.745 Power Management 00:13:00.745 ================ 00:13:00.745 Number of Power States: 1 00:13:00.745 Current Power State: Power State #0 00:13:00.745 Power State #0: 00:13:00.745 Max Power: 0.00 W 00:13:00.745 Non-Operational State: Operational 00:13:00.745 Entry Latency: Not Reported 00:13:00.745 Exit Latency: Not Reported 00:13:00.745 Relative Read Throughput: 0 00:13:00.745 Relative Read Latency: 0 00:13:00.745 Relative Write Throughput: 0 00:13:00.745 Relative Write Latency: 0 00:13:00.745 Idle Power: Not Reported 00:13:00.745 Active Power: Not Reported 00:13:00.745 Non-Operational Permissive Mode: Not Supported 00:13:00.745 00:13:00.745 Health Information 00:13:00.745 ================== 00:13:00.745 Critical Warnings: 00:13:00.745 Available Spare Space: OK 00:13:00.745 Temperature: OK 00:13:00.745 Device Reliability: OK 00:13:00.745 Read Only: No 00:13:00.745 Volatile Memory Backup: OK 00:13:00.745 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:00.745 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:00.745 Available Spare: 0% 00:13:00.745 Available Sp[2024-07-16 00:39:18.489426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:00.745 [2024-07-16 00:39:18.497265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:00.745 [2024-07-16 00:39:18.497302] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:00.745 [2024-07-16 00:39:18.497314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.745 [2024-07-16 00:39:18.497323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.745 [2024-07-16 00:39:18.497331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.745 [2024-07-16 00:39:18.497339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:00.745 [2024-07-16 00:39:18.497418] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:00.745 [2024-07-16 00:39:18.497431] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:00.745 [2024-07-16 00:39:18.498433] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:00.745 [2024-07-16 00:39:18.498495] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:00.745 [2024-07-16 00:39:18.498504] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:00.745 [2024-07-16 00:39:18.499434] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:00.745 [2024-07-16 00:39:18.499450] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:00.745 [2024-07-16 00:39:18.499505] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:00.745 [2024-07-16 00:39:18.501150] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:00.745 are Threshold: 0% 00:13:00.745 Life Percentage Used: 0% 00:13:00.745 Data Units Read: 0 00:13:00.745 Data Units Written: 0 00:13:00.745 Host Read Commands: 0 00:13:00.745 Host Write Commands: 0 00:13:00.745 Controller Busy Time: 0 minutes 00:13:00.745 Power Cycles: 0 00:13:00.745 Power On Hours: 0 hours 00:13:00.745 Unsafe Shutdowns: 0 00:13:00.745 Unrecoverable Media Errors: 0 00:13:00.745 Lifetime Error Log Entries: 0 00:13:00.745 Warning Temperature Time: 0 minutes 00:13:00.745 Critical Temperature Time: 0 minutes 00:13:00.745 00:13:00.745 Number of Queues 00:13:00.745 ================ 00:13:00.745 Number of I/O Submission Queues: 127 00:13:00.745 Number of I/O Completion Queues: 127 00:13:00.745 00:13:00.745 Active Namespaces 00:13:00.745 ================= 00:13:00.745 Namespace ID:1 00:13:00.745 Error Recovery Timeout: Unlimited 00:13:00.745 Command Set Identifier: NVM (00h) 00:13:00.745 Deallocate: Supported 00:13:00.745 Deallocated/Unwritten Error: Not Supported 00:13:00.745 Deallocated Read Value: Unknown 00:13:00.745 Deallocate in Write Zeroes: Not Supported 00:13:00.745 Deallocated Guard Field: 0xFFFF 00:13:00.745 Flush: Supported 00:13:00.745 Reservation: Supported 00:13:00.745 Namespace Sharing Capabilities: Multiple Controllers 00:13:00.745 Size (in LBAs): 131072 (0GiB) 00:13:00.745 Capacity (in LBAs): 131072 (0GiB) 00:13:00.745 Utilization (in LBAs): 131072 (0GiB) 00:13:00.745 NGUID: 3FE539CF8A6340318A06863EDA5F0702 00:13:00.745 UUID: 3fe539cf-8a63-4031-8a06-863eda5f0702 00:13:00.745 Thin Provisioning: Not Supported 00:13:00.745 Per-NS Atomic Units: Yes 00:13:00.745 Atomic Boundary Size (Normal): 0 00:13:00.745 Atomic Boundary Size 
(PFail): 0 00:13:00.745 Atomic Boundary Offset: 0 00:13:00.745 Maximum Single Source Range Length: 65535 00:13:00.745 Maximum Copy Length: 65535 00:13:00.745 Maximum Source Range Count: 1 00:13:00.745 NGUID/EUI64 Never Reused: No 00:13:00.745 Namespace Write Protected: No 00:13:00.745 Number of LBA Formats: 1 00:13:00.745 Current LBA Format: LBA Format #00 00:13:00.745 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:00.745 00:13:00.745 00:39:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:01.005 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.005 [2024-07-16 00:39:18.771576] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:06.272 Initializing NVMe Controllers 00:13:06.272 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:06.272 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:06.272 Initialization complete. Launching workers. 00:13:06.272 ======================================================== 00:13:06.272 Latency(us) 00:13:06.272 Device Information : IOPS MiB/s Average min max 00:13:06.272 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 18625.90 72.76 6872.98 2716.24 13443.51 00:13:06.272 ======================================================== 00:13:06.272 Total : 18625.90 72.76 6872.98 2716.24 13443.51 00:13:06.272 00:13:06.272 [2024-07-16 00:39:23.877576] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:06.272 00:39:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:06.272 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.530 [2024-07-16 00:39:24.177829] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:11.799 Initializing NVMe Controllers 00:13:11.799 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:11.799 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:11.799 Initialization complete. Launching workers. 
00:13:11.799 ======================================================== 00:13:11.799 Latency(us) 00:13:11.799 Device Information : IOPS MiB/s Average min max 00:13:11.799 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24102.97 94.15 5311.17 1567.47 11425.21 00:13:11.799 ======================================================== 00:13:11.799 Total : 24102.97 94.15 5311.17 1567.47 11425.21 00:13:11.799 00:13:11.799 [2024-07-16 00:39:29.201095] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:11.799 00:39:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:11.799 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.799 [2024-07-16 00:39:29.493680] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:17.073 [2024-07-16 00:39:34.641386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:17.073 Initializing NVMe Controllers 00:13:17.073 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:17.073 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:17.073 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:17.073 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:17.073 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:17.073 Initialization complete. Launching workers. 00:13:17.073 Starting thread on core 2 00:13:17.073 Starting thread on core 3 00:13:17.073 Starting thread on core 1 00:13:17.073 00:39:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:17.073 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.331 [2024-07-16 00:39:35.001997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:20.616 [2024-07-16 00:39:38.083758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:20.616 Initializing NVMe Controllers 00:13:20.616 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:20.616 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:20.616 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:20.616 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:20.616 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:20.616 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:20.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:20.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:20.616 Initialization complete. Launching workers. 
00:13:20.616 Starting thread on core 1 with urgent priority queue 00:13:20.616 Starting thread on core 2 with urgent priority queue 00:13:20.616 Starting thread on core 3 with urgent priority queue 00:13:20.616 Starting thread on core 0 with urgent priority queue 00:13:20.616 SPDK bdev Controller (SPDK2 ) core 0: 6623.67 IO/s 15.10 secs/100000 ios 00:13:20.616 SPDK bdev Controller (SPDK2 ) core 1: 4168.67 IO/s 23.99 secs/100000 ios 00:13:20.616 SPDK bdev Controller (SPDK2 ) core 2: 3969.00 IO/s 25.20 secs/100000 ios 00:13:20.616 SPDK bdev Controller (SPDK2 ) core 3: 6656.00 IO/s 15.02 secs/100000 ios 00:13:20.616 ======================================================== 00:13:20.616 00:13:20.616 00:39:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:20.616 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.616 [2024-07-16 00:39:38.410533] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:20.616 Initializing NVMe Controllers 00:13:20.616 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:20.616 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:20.616 Namespace ID: 1 size: 0GB 00:13:20.616 Initialization complete. 00:13:20.616 INFO: using host memory buffer for IO 00:13:20.616 Hello world! 00:13:20.616 [2024-07-16 00:39:38.422856] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:20.874 00:39:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:20.874 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.134 [2024-07-16 00:39:38.760571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:22.070 Initializing NVMe Controllers 00:13:22.070 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:22.070 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:22.070 Initialization complete. Launching workers. 
00:13:22.070 submit (in ns) avg, min, max = 9798.9, 4562.7, 4003227.3 00:13:22.070 complete (in ns) avg, min, max = 34785.0, 2716.4, 6990480.9 00:13:22.070 00:13:22.070 Submit histogram 00:13:22.070 ================ 00:13:22.070 Range in us Cumulative Count 00:13:22.070 4.538 - 4.567: 0.0107% ( 1) 00:13:22.070 4.567 - 4.596: 0.2999% ( 27) 00:13:22.070 4.596 - 4.625: 1.2101% ( 85) 00:13:22.070 4.625 - 4.655: 2.7094% ( 140) 00:13:22.070 4.655 - 4.684: 5.2260% ( 235) 00:13:22.070 4.684 - 4.713: 12.1332% ( 645) 00:13:22.070 4.713 - 4.742: 19.3189% ( 671) 00:13:22.070 4.742 - 4.771: 30.5847% ( 1052) 00:13:22.070 4.771 - 4.800: 44.2600% ( 1277) 00:13:22.070 4.800 - 4.829: 54.9368% ( 997) 00:13:22.070 4.829 - 4.858: 65.0675% ( 946) 00:13:22.070 4.858 - 4.887: 73.4954% ( 787) 00:13:22.070 4.887 - 4.916: 79.6744% ( 577) 00:13:22.070 4.916 - 4.945: 84.0437% ( 408) 00:13:22.070 4.945 - 4.975: 86.3247% ( 213) 00:13:22.070 4.975 - 5.004: 87.7061% ( 129) 00:13:22.070 5.004 - 5.033: 89.3874% ( 157) 00:13:22.070 5.033 - 5.062: 91.3900% ( 187) 00:13:22.070 5.062 - 5.091: 92.9321% ( 144) 00:13:22.070 5.091 - 5.120: 94.5813% ( 154) 00:13:22.070 5.120 - 5.149: 96.3375% ( 164) 00:13:22.070 5.149 - 5.178: 97.6119% ( 119) 00:13:22.070 5.178 - 5.207: 98.3937% ( 73) 00:13:22.070 5.207 - 5.236: 98.8970% ( 47) 00:13:22.070 5.236 - 5.265: 99.2611% ( 34) 00:13:22.070 5.265 - 5.295: 99.3789% ( 11) 00:13:22.070 5.295 - 5.324: 99.4110% ( 3) 00:13:22.070 5.324 - 5.353: 99.4646% ( 5) 00:13:22.070 5.353 - 5.382: 99.5074% ( 4) 00:13:22.070 5.382 - 5.411: 99.5181% ( 1) 00:13:22.070 5.440 - 5.469: 99.5288% ( 1) 00:13:22.070 5.585 - 5.615: 99.5395% ( 1) 00:13:22.070 5.615 - 5.644: 99.5502% ( 1) 00:13:22.070 5.644 - 5.673: 99.5609% ( 1) 00:13:22.070 5.731 - 5.760: 99.5824% ( 2) 00:13:22.070 5.905 - 5.935: 99.5931% ( 1) 00:13:22.070 5.993 - 6.022: 99.6038% ( 1) 00:13:22.070 6.022 - 6.051: 99.6145% ( 1) 00:13:22.070 8.087 - 8.145: 99.6252% ( 1) 00:13:22.070 8.145 - 8.204: 99.6359% ( 1) 00:13:22.070 8.262 - 8.320: 99.6466% ( 1) 00:13:22.070 8.320 - 8.378: 99.6573% ( 1) 00:13:22.070 8.553 - 8.611: 99.6680% ( 1) 00:13:22.070 8.611 - 8.669: 99.6894% ( 2) 00:13:22.070 8.727 - 8.785: 99.7001% ( 1) 00:13:22.070 8.960 - 9.018: 99.7109% ( 1) 00:13:22.070 9.018 - 9.076: 99.7216% ( 1) 00:13:22.070 9.135 - 9.193: 99.7323% ( 1) 00:13:22.070 9.251 - 9.309: 99.7430% ( 1) 00:13:22.070 9.309 - 9.367: 99.7537% ( 1) 00:13:22.070 9.484 - 9.542: 99.7858% ( 3) 00:13:22.070 9.542 - 9.600: 99.7965% ( 1) 00:13:22.070 9.600 - 9.658: 99.8072% ( 1) 00:13:22.070 9.658 - 9.716: 99.8179% ( 1) 00:13:22.070 9.775 - 9.833: 99.8287% ( 1) 00:13:22.070 10.764 - 10.822: 99.8394% ( 1) 00:13:22.070 11.753 - 11.811: 99.8501% ( 1) 00:13:22.070 12.975 - 13.033: 99.8608% ( 1) 00:13:22.070 13.673 - 13.731: 99.8715% ( 1) 00:13:22.070 2219.287 - 2234.182: 99.8822% ( 1) 00:13:22.070 3991.738 - 4021.527: 100.0000% ( 11) 00:13:22.070 00:13:22.070 Complete histogram 00:13:22.070 ================== 00:13:22.070 Range in us Cumulative Count 00:13:22.070 2.705 - 2.720: 0.0321% ( 3) 00:13:22.070 2.720 - 2.735: 3.0628% ( 283) 00:13:22.071 2.735 - 2.749: 26.6759% ( 2205) 00:13:22.071 2.749 - 2.764: 50.4069% ( 2216) 00:13:22.071 2.764 - 2.778: 56.2326% ( 544) 00:13:22.071 2.778 - 2.793: 59.0383% ( 262) 00:13:22.071 2.793 - 2.807: 64.0715% ( 470) 00:13:22.071 2.807 - 2.822: 80.0493% ( 1492) 00:13:22.071 2.822 - 2.836: 91.6899% ( 1087) 00:13:22.071 2.836 - 2.851: 94.9561% ( 305) 00:13:22.071 2.851 - 2.865: 96.3161% ( 127) 00:13:22.071 2.865 - 2.880: 96.8730% ( 52) 00:13:22.071 2.880 - 
2.895: 97.2799% ( 38) 00:13:22.071 2.895 - 2.909: 97.8689% ( 55) 00:13:22.071 2.909 - 2.924: 98.2437% ( 35) 00:13:22.071 2.924 - 2.938: 98.3937% ( 14) 00:13:22.071 2.938 - [2024-07-16 00:39:39.857733] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:22.330 2.953: 98.4900% ( 9) 00:13:22.330 2.953 - 2.967: 98.5757% ( 8) 00:13:22.330 2.967 - 2.982: 98.6400% ( 6) 00:13:22.330 2.982 - 2.996: 98.6614% ( 2) 00:13:22.330 2.996 - 3.011: 98.7042% ( 4) 00:13:22.330 3.011 - 3.025: 98.7471% ( 4) 00:13:22.330 3.025 - 3.040: 98.7792% ( 3) 00:13:22.330 3.040 - 3.055: 98.7899% ( 1) 00:13:22.330 3.055 - 3.069: 98.8006% ( 1) 00:13:22.330 3.069 - 3.084: 98.8220% ( 2) 00:13:22.330 3.084 - 3.098: 98.8327% ( 1) 00:13:22.330 3.113 - 3.127: 98.8756% ( 4) 00:13:22.330 3.127 - 3.142: 98.8970% ( 2) 00:13:22.330 3.142 - 3.156: 98.9184% ( 2) 00:13:22.330 3.156 - 3.171: 98.9398% ( 2) 00:13:22.330 3.171 - 3.185: 98.9612% ( 2) 00:13:22.330 3.258 - 3.273: 98.9719% ( 1) 00:13:22.330 3.316 - 3.331: 98.9934% ( 2) 00:13:22.330 3.433 - 3.447: 99.0041% ( 1) 00:13:22.330 3.476 - 3.491: 99.0148% ( 1) 00:13:22.330 3.709 - 3.724: 99.0255% ( 1) 00:13:22.330 3.753 - 3.782: 99.0362% ( 1) 00:13:22.330 5.818 - 5.847: 99.0469% ( 1) 00:13:22.330 5.905 - 5.935: 99.0576% ( 1) 00:13:22.330 6.080 - 6.109: 99.0790% ( 2) 00:13:22.330 6.371 - 6.400: 99.0897% ( 1) 00:13:22.330 6.516 - 6.545: 99.1004% ( 1) 00:13:22.330 6.575 - 6.604: 99.1112% ( 1) 00:13:22.330 6.720 - 6.749: 99.1219% ( 1) 00:13:22.330 7.564 - 7.622: 99.1326% ( 1) 00:13:22.330 7.622 - 7.680: 99.1433% ( 1) 00:13:22.330 7.680 - 7.738: 99.1540% ( 1) 00:13:22.330 7.738 - 7.796: 99.1647% ( 1) 00:13:22.330 7.796 - 7.855: 99.1754% ( 1) 00:13:22.330 8.145 - 8.204: 99.1861% ( 1) 00:13:22.330 8.204 - 8.262: 99.1968% ( 1) 00:13:22.330 15.360 - 15.476: 99.2075% ( 1) 00:13:22.330 3991.738 - 4021.527: 99.9786% ( 72) 00:13:22.330 4021.527 - 4051.316: 99.9893% ( 1) 00:13:22.330 6970.647 - 7000.436: 100.0000% ( 1) 00:13:22.330 00:13:22.330 00:39:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:22.330 00:39:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:22.330 00:39:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:22.330 00:39:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:22.330 00:39:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:22.330 [ 00:13:22.330 { 00:13:22.330 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:22.330 "subtype": "Discovery", 00:13:22.330 "listen_addresses": [], 00:13:22.330 "allow_any_host": true, 00:13:22.330 "hosts": [] 00:13:22.330 }, 00:13:22.330 { 00:13:22.330 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:22.330 "subtype": "NVMe", 00:13:22.330 "listen_addresses": [ 00:13:22.330 { 00:13:22.330 "trtype": "VFIOUSER", 00:13:22.330 "adrfam": "IPv4", 00:13:22.330 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:22.330 "trsvcid": "0" 00:13:22.330 } 00:13:22.330 ], 00:13:22.330 "allow_any_host": true, 00:13:22.330 "hosts": [], 00:13:22.330 "serial_number": "SPDK1", 00:13:22.330 "model_number": "SPDK bdev Controller", 00:13:22.330 "max_namespaces": 32, 00:13:22.330 "min_cntlid": 1, 00:13:22.330 "max_cntlid": 65519, 00:13:22.330 
"namespaces": [ 00:13:22.330 { 00:13:22.330 "nsid": 1, 00:13:22.330 "bdev_name": "Malloc1", 00:13:22.330 "name": "Malloc1", 00:13:22.330 "nguid": "D856BDD6AF5C4DEBBDFF5E0CA7837E90", 00:13:22.330 "uuid": "d856bdd6-af5c-4deb-bdff-5e0ca7837e90" 00:13:22.330 }, 00:13:22.330 { 00:13:22.330 "nsid": 2, 00:13:22.330 "bdev_name": "Malloc3", 00:13:22.330 "name": "Malloc3", 00:13:22.330 "nguid": "84E34CBC35744E289B5877E6389AEA65", 00:13:22.330 "uuid": "84e34cbc-3574-4e28-9b58-77e6389aea65" 00:13:22.330 } 00:13:22.330 ] 00:13:22.330 }, 00:13:22.330 { 00:13:22.330 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:22.330 "subtype": "NVMe", 00:13:22.330 "listen_addresses": [ 00:13:22.330 { 00:13:22.330 "trtype": "VFIOUSER", 00:13:22.330 "adrfam": "IPv4", 00:13:22.330 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:22.330 "trsvcid": "0" 00:13:22.330 } 00:13:22.330 ], 00:13:22.330 "allow_any_host": true, 00:13:22.330 "hosts": [], 00:13:22.330 "serial_number": "SPDK2", 00:13:22.330 "model_number": "SPDK bdev Controller", 00:13:22.330 "max_namespaces": 32, 00:13:22.330 "min_cntlid": 1, 00:13:22.330 "max_cntlid": 65519, 00:13:22.330 "namespaces": [ 00:13:22.330 { 00:13:22.330 "nsid": 1, 00:13:22.330 "bdev_name": "Malloc2", 00:13:22.330 "name": "Malloc2", 00:13:22.330 "nguid": "3FE539CF8A6340318A06863EDA5F0702", 00:13:22.330 "uuid": "3fe539cf-8a63-4031-8a06-863eda5f0702" 00:13:22.330 } 00:13:22.330 ] 00:13:22.330 } 00:13:22.330 ] 00:13:22.589 00:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:22.589 00:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:22.589 00:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2964109 00:13:22.589 00:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:22.589 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:22.589 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:22.589 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:22.589 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:22.589 00:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:22.589 00:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:22.589 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.589 Malloc4 00:13:22.589 [2024-07-16 00:39:40.373090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:22.589 00:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:22.848 [2024-07-16 00:39:40.550753] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:22.848 00:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:22.848 Asynchronous Event Request test 00:13:22.848 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:22.848 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:22.848 Registering asynchronous event callbacks... 00:13:22.848 Starting namespace attribute notice tests for all controllers... 00:13:22.848 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:22.848 aer_cb - Changed Namespace 00:13:22.848 Cleaning up... 00:13:23.107 [ 00:13:23.107 { 00:13:23.107 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:23.107 "subtype": "Discovery", 00:13:23.107 "listen_addresses": [], 00:13:23.107 "allow_any_host": true, 00:13:23.107 "hosts": [] 00:13:23.107 }, 00:13:23.107 { 00:13:23.107 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:23.107 "subtype": "NVMe", 00:13:23.107 "listen_addresses": [ 00:13:23.107 { 00:13:23.107 "trtype": "VFIOUSER", 00:13:23.107 "adrfam": "IPv4", 00:13:23.107 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:23.107 "trsvcid": "0" 00:13:23.107 } 00:13:23.107 ], 00:13:23.107 "allow_any_host": true, 00:13:23.107 "hosts": [], 00:13:23.107 "serial_number": "SPDK1", 00:13:23.107 "model_number": "SPDK bdev Controller", 00:13:23.107 "max_namespaces": 32, 00:13:23.107 "min_cntlid": 1, 00:13:23.107 "max_cntlid": 65519, 00:13:23.107 "namespaces": [ 00:13:23.107 { 00:13:23.107 "nsid": 1, 00:13:23.107 "bdev_name": "Malloc1", 00:13:23.107 "name": "Malloc1", 00:13:23.107 "nguid": "D856BDD6AF5C4DEBBDFF5E0CA7837E90", 00:13:23.107 "uuid": "d856bdd6-af5c-4deb-bdff-5e0ca7837e90" 00:13:23.107 }, 00:13:23.107 { 00:13:23.107 "nsid": 2, 00:13:23.107 "bdev_name": "Malloc3", 00:13:23.107 "name": "Malloc3", 00:13:23.107 "nguid": "84E34CBC35744E289B5877E6389AEA65", 00:13:23.108 "uuid": "84e34cbc-3574-4e28-9b58-77e6389aea65" 00:13:23.108 } 00:13:23.108 ] 00:13:23.108 }, 00:13:23.108 { 00:13:23.108 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:23.108 "subtype": "NVMe", 00:13:23.108 "listen_addresses": [ 00:13:23.108 { 00:13:23.108 "trtype": "VFIOUSER", 00:13:23.108 "adrfam": "IPv4", 00:13:23.108 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:23.108 "trsvcid": "0" 00:13:23.108 } 00:13:23.108 ], 00:13:23.108 "allow_any_host": true, 00:13:23.108 "hosts": [], 00:13:23.108 "serial_number": "SPDK2", 00:13:23.108 "model_number": "SPDK bdev Controller", 00:13:23.108 
"max_namespaces": 32, 00:13:23.108 "min_cntlid": 1, 00:13:23.108 "max_cntlid": 65519, 00:13:23.108 "namespaces": [ 00:13:23.108 { 00:13:23.108 "nsid": 1, 00:13:23.108 "bdev_name": "Malloc2", 00:13:23.108 "name": "Malloc2", 00:13:23.108 "nguid": "3FE539CF8A6340318A06863EDA5F0702", 00:13:23.108 "uuid": "3fe539cf-8a63-4031-8a06-863eda5f0702" 00:13:23.108 }, 00:13:23.108 { 00:13:23.108 "nsid": 2, 00:13:23.108 "bdev_name": "Malloc4", 00:13:23.108 "name": "Malloc4", 00:13:23.108 "nguid": "F108C52D7A40458AA517DB2D4ACFECAC", 00:13:23.108 "uuid": "f108c52d-7a40-458a-a517-db2d4acfecac" 00:13:23.108 } 00:13:23.108 ] 00:13:23.108 } 00:13:23.108 ] 00:13:23.108 00:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2964109 00:13:23.108 00:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:23.108 00:39:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2954842 00:13:23.108 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2954842 ']' 00:13:23.108 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2954842 00:13:23.108 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:23.108 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.108 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2954842 00:13:23.108 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:23.108 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:23.108 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2954842' 00:13:23.108 killing process with pid 2954842 00:13:23.108 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2954842 00:13:23.108 00:39:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2954842 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2964323 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2964323' 00:13:23.367 Process pid: 2964323 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2964323 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2964323 ']' 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.367 00:39:41 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.367 00:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:23.626 [2024-07-16 00:39:41.242281] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:23.626 [2024-07-16 00:39:41.243556] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:13:23.627 [2024-07-16 00:39:41.243596] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.627 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.627 [2024-07-16 00:39:41.326777] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.627 [2024-07-16 00:39:41.411615] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.627 [2024-07-16 00:39:41.411661] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.627 [2024-07-16 00:39:41.411672] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.627 [2024-07-16 00:39:41.411681] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.627 [2024-07-16 00:39:41.411689] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.627 [2024-07-16 00:39:41.411791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.627 [2024-07-16 00:39:41.411903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.627 [2024-07-16 00:39:41.411991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.627 [2024-07-16 00:39:41.411991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.886 [2024-07-16 00:39:41.495763] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:23.886 [2024-07-16 00:39:41.496294] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:23.886 [2024-07-16 00:39:41.496424] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:23.886 [2024-07-16 00:39:41.496476] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:23.886 [2024-07-16 00:39:41.496683] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
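For anyone replaying this step outside the Jenkins harness, the interrupt-mode bring-up that just completed reduces to roughly the following. This is a minimal sketch assuming an SPDK checkout built under ./build (the harness uses absolute workspace paths and a waitforlisten helper rather than the sleep shown here); the flags are copied from the invocation visible in the log:

    # relaunch the target on cores 0-3 in interrupt mode, as in the log above
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    sleep 1   # the harness waits for the RPC socket instead of sleeping
    # recreate the vfio-user transport with the same -M -I arguments
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I

The per-device setup that follows (mkdir of the vfio-user domain directory, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) repeats the rpc.py sequence shown in the next few log lines for cnode1 and cnode2.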
00:13:23.886 00:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.886 00:39:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:23.886 00:39:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:24.823 00:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:25.081 00:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:25.081 00:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:25.081 00:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:25.081 00:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:25.081 00:39:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:25.340 Malloc1 00:13:25.340 00:39:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:25.597 00:39:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:25.855 00:39:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:26.113 00:39:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:26.113 00:39:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:26.113 00:39:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:26.370 Malloc2 00:13:26.370 00:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:26.629 00:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:26.887 00:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:27.146 00:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:27.146 00:39:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2964323 00:13:27.146 00:39:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2964323 ']' 00:13:27.146 00:39:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2964323 00:13:27.146 00:39:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:27.146 00:39:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:27.146 00:39:44 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2964323 00:13:27.146 00:39:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:27.146 00:39:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:27.146 00:39:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2964323' 00:13:27.146 killing process with pid 2964323 00:13:27.146 00:39:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2964323 00:13:27.146 00:39:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2964323 00:13:27.405 00:39:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:27.405 00:39:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:27.405 00:13:27.405 real 0m54.407s 00:13:27.405 user 3m35.141s 00:13:27.405 sys 0m4.037s 00:13:27.405 00:39:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.405 00:39:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:27.405 ************************************ 00:13:27.405 END TEST nvmf_vfio_user 00:13:27.405 ************************************ 00:13:27.405 00:39:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:27.405 00:39:45 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:27.405 00:39:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:27.405 00:39:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.405 00:39:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:27.665 ************************************ 00:13:27.665 START TEST nvmf_vfio_user_nvme_compliance 00:13:27.665 ************************************ 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:27.666 * Looking for test storage... 
00:13:27.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2965181 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2965181' 00:13:27.666 Process pid: 2965181 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2965181 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2965181 ']' 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.666 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:27.666 [2024-07-16 00:39:45.441963] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:13:27.666 [2024-07-16 00:39:45.442019] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.666 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.926 [2024-07-16 00:39:45.523730] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:27.926 [2024-07-16 00:39:45.615046] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.926 [2024-07-16 00:39:45.615088] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.926 [2024-07-16 00:39:45.615098] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.926 [2024-07-16 00:39:45.615107] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.926 [2024-07-16 00:39:45.615114] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
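The tracepoint notices above can be acted on directly while this compliance target is running. As a small sketch, assuming the spdk_trace tool was built alongside the target under ./build (the command itself is exactly the one the notice prints):

    # take a snapshot of nvmf tracepoints for shm id 0, per the notice above
    ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # or keep the raw trace file for offline analysis, also per the notice
    cp /dev/shm/nvmf_trace.0 /tmp/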
00:13:27.926 [2024-07-16 00:39:45.615168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.926 [2024-07-16 00:39:45.615289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.926 [2024-07-16 00:39:45.615290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.926 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.926 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:27.926 00:39:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:29.305 malloc0 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:29.305 00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.305 
00:39:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:29.305 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.305 00:13:29.305 00:13:29.305 CUnit - A unit testing framework for C - Version 2.1-3 00:13:29.305 http://cunit.sourceforge.net/ 00:13:29.305 00:13:29.305 00:13:29.305 Suite: nvme_compliance 00:13:29.305 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-16 00:39:46.997997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.305 [2024-07-16 00:39:46.999509] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:29.305 [2024-07-16 00:39:46.999533] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:29.305 [2024-07-16 00:39:46.999545] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:29.305 [2024-07-16 00:39:47.001028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.305 passed 00:13:29.305 Test: admin_identify_ctrlr_verify_fused ...[2024-07-16 00:39:47.104101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.305 [2024-07-16 00:39:47.107135] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.564 passed 00:13:29.564 Test: admin_identify_ns ...[2024-07-16 00:39:47.215823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.564 [2024-07-16 00:39:47.275277] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:29.564 [2024-07-16 00:39:47.283276] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:29.564 [2024-07-16 00:39:47.304422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.564 passed 00:13:29.822 Test: admin_get_features_mandatory_features ...[2024-07-16 00:39:47.403936] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.822 [2024-07-16 00:39:47.408995] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.822 passed 00:13:29.822 Test: admin_get_features_optional_features ...[2024-07-16 00:39:47.509019] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:29.822 [2024-07-16 00:39:47.513072] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:29.822 passed 00:13:29.822 Test: admin_set_features_number_of_queues ...[2024-07-16 00:39:47.614312] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.081 [2024-07-16 00:39:47.718417] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.081 passed 00:13:30.081 Test: admin_get_log_page_mandatory_logs ...[2024-07-16 00:39:47.820085] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.081 [2024-07-16 00:39:47.823136] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.081 passed 00:13:30.340 Test: admin_get_log_page_with_lpo ...[2024-07-16 00:39:47.925339] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.340 [2024-07-16 00:39:47.995286] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:30.340 [2024-07-16 00:39:48.008349] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.340 passed 00:13:30.340 Test: fabric_property_get ...[2024-07-16 00:39:48.107867] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.340 [2024-07-16 00:39:48.109236] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:30.340 [2024-07-16 00:39:48.110896] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.340 passed 00:13:30.599 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-16 00:39:48.217068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.599 [2024-07-16 00:39:48.218540] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:30.599 [2024-07-16 00:39:48.220108] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.599 passed 00:13:30.599 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-16 00:39:48.319711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.599 [2024-07-16 00:39:48.403268] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:30.599 [2024-07-16 00:39:48.417279] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:30.599 [2024-07-16 00:39:48.422372] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.903 passed 00:13:30.903 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-16 00:39:48.523025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.903 [2024-07-16 00:39:48.524519] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:30.903 [2024-07-16 00:39:48.526060] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.903 passed 00:13:30.903 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-16 00:39:48.627314] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.903 [2024-07-16 00:39:48.704269] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:31.179 [2024-07-16 00:39:48.728269] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:31.179 [2024-07-16 00:39:48.733378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.179 passed 00:13:31.180 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-16 00:39:48.838140] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.180 [2024-07-16 00:39:48.839640] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:31.180 [2024-07-16 00:39:48.839701] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:31.180 [2024-07-16 00:39:48.841186] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.180 passed 00:13:31.180 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-16 00:39:48.944332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.486 [2024-07-16 00:39:49.037271] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:31.486 [2024-07-16 00:39:49.045269] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:31.486 [2024-07-16 00:39:49.053272] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:31.486 [2024-07-16 00:39:49.061279] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:31.486 [2024-07-16 00:39:49.090362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.486 passed 00:13:31.486 Test: admin_create_io_sq_verify_pc ...[2024-07-16 00:39:49.192114] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.486 [2024-07-16 00:39:49.208280] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:31.486 [2024-07-16 00:39:49.226355] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.486 passed 00:13:31.744 Test: admin_create_io_qp_max_qps ...[2024-07-16 00:39:49.329396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.681 [2024-07-16 00:39:50.431270] nvme_ctrlr.c:5475:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:33.249 [2024-07-16 00:39:50.811946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.249 passed 00:13:33.249 Test: admin_create_io_sq_shared_cq ...[2024-07-16 00:39:50.913658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.249 [2024-07-16 00:39:51.049268] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:33.249 [2024-07-16 00:39:51.086344] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.508 passed 00:13:33.508 00:13:33.508 Run Summary: Type Total Ran Passed Failed Inactive 00:13:33.508 suites 1 1 n/a 0 0 00:13:33.508 tests 18 18 18 0 0 00:13:33.508 asserts 360 360 360 0 n/a 00:13:33.508 00:13:33.508 Elapsed time = 1.734 seconds 00:13:33.508 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2965181 00:13:33.508 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2965181 ']' 00:13:33.508 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2965181 00:13:33.508 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:33.508 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:33.508 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2965181 00:13:33.508 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:33.508 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:33.508 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2965181' 00:13:33.508 killing process with pid 2965181 00:13:33.508 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2965181 00:13:33.508 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2965181 00:13:33.767 00:39:51 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:33.767 00:13:33.767 real 0m6.167s 00:13:33.767 user 0m17.379s 00:13:33.767 sys 0m0.490s 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.767 ************************************ 00:13:33.767 END TEST nvmf_vfio_user_nvme_compliance 00:13:33.767 ************************************ 00:13:33.767 00:39:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:33.767 00:39:51 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:33.767 00:39:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:33.767 00:39:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.767 00:39:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:33.767 ************************************ 00:13:33.767 START TEST nvmf_vfio_user_fuzz 00:13:33.767 ************************************ 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:33.767 * Looking for test storage... 00:13:33.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.767 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.026 00:39:51 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2966299 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2966299' 00:13:34.026 Process pid: 2966299 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2966299 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2966299 ']' 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
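(For context, the launch-and-wait pattern the fuzz test is driving here reduces to the sketch below; the nvmf_tgt command line and the trap are copied from the trace, killprocess and waitforlisten are the autotest_common.sh helpers named in it, and capturing the pid with $! is an assumption about the script's plumbing.)

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!                                # assumed: pid of the backgrounded target
    echo "Process pid: $nvmfpid"
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$nvmfpid"                  # returns once the target answers on /var/tmp/spdk.sock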
00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:34.026 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:34.284 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.284 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:34.284 00:39:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:35.219 malloc0 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.219 00:39:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:35.219 00:39:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.219 00:39:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:35.220 00:39:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.220 00:39:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:35.220 00:39:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.220 00:39:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:35.220 00:39:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:07.296 Fuzzing completed. 
Shutting down the fuzz application 00:14:07.296 00:14:07.296 Dumping successful admin opcodes: 00:14:07.296 8, 9, 10, 24, 00:14:07.296 Dumping successful io opcodes: 00:14:07.296 0, 00:14:07.296 NS: 0x200003a1ef00 I/O qp, Total commands completed: 587696, total successful commands: 2264, random_seed: 455863872 00:14:07.296 NS: 0x200003a1ef00 admin qp, Total commands completed: 143892, total successful commands: 1170, random_seed: 2560253440 00:14:07.296 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:07.296 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.296 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:07.296 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.296 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2966299 00:14:07.296 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2966299 ']' 00:14:07.296 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2966299 00:14:07.296 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:07.296 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.296 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2966299 00:14:07.296 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:07.297 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:07.297 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2966299' 00:14:07.297 killing process with pid 2966299 00:14:07.297 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2966299 00:14:07.297 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2966299 00:14:07.297 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:07.297 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:07.297 00:14:07.297 real 0m32.362s 00:14:07.297 user 0m36.934s 00:14:07.297 sys 0m24.310s 00:14:07.297 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.297 00:40:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:07.297 ************************************ 00:14:07.297 END TEST nvmf_vfio_user_fuzz 00:14:07.297 ************************************ 00:14:07.297 00:40:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:07.297 00:40:23 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:07.297 00:40:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:07.297 00:40:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.297 00:40:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.297 ************************************ 
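(For reference, the vfio-user fuzz target exercised above was assembled through the RPC sequence visible in the trace; it is condensed below as a sketch, with scripts/rpc.py assumed as a stand-in for the test's rpc_cmd wrapper and the default /var/tmp/spdk.sock RPC socket assumed.)

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MB backing bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # The fuzzer is then pointed at that socket directory for 30 seconds with a fixed seed:
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a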
00:14:07.297 START TEST nvmf_host_management 00:14:07.297 ************************************ 00:14:07.297 00:40:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:07.297 * Looking for test storage... 00:14:07.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.297 
00:40:24 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.297 00:40:24 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.297 00:40:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:12.569 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:12.569 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:12.569 Found net devices under 0000:af:00.0: cvl_0_0 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:12.569 Found net devices under 0000:af:00.1: cvl_0_1 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.569 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:12.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:14:12.570 00:14:12.570 --- 10.0.0.2 ping statistics --- 00:14:12.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.570 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:14:12.570 00:14:12.570 --- 10.0.0.1 ping statistics --- 00:14:12.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.570 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2975232 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2975232 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2975232 ']' 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:12.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.570 00:40:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:12.570 [2024-07-16 00:40:29.987588] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:14:12.570 [2024-07-16 00:40:29.987643] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.570 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.570 [2024-07-16 00:40:30.081178] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.570 [2024-07-16 00:40:30.191290] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.570 [2024-07-16 00:40:30.191338] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.570 [2024-07-16 00:40:30.191351] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.570 [2024-07-16 00:40:30.191362] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.570 [2024-07-16 00:40:30.191372] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.570 [2024-07-16 00:40:30.191493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.570 [2024-07-16 00:40:30.191604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.570 [2024-07-16 00:40:30.191716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:12.570 [2024-07-16 00:40:30.191718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.138 00:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.138 00:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:13.138 00:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.138 00:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.138 00:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:13.138 00:40:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.138 00:40:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.138 00:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.138 00:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:13.397 [2024-07-16 00:40:30.982180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.397 00:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.397 00:40:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:13.397 00:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.397 00:40:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:13.397 00:40:31 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:13.397 Malloc0 00:14:13.397 [2024-07-16 00:40:31.057124] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2975505 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2975505 /var/tmp/bdevperf.sock 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2975505 ']' 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:13.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:13.397 { 00:14:13.397 "params": { 00:14:13.397 "name": "Nvme$subsystem", 00:14:13.397 "trtype": "$TEST_TRANSPORT", 00:14:13.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.397 "adrfam": "ipv4", 00:14:13.397 "trsvcid": "$NVMF_PORT", 00:14:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.397 "hdgst": ${hdgst:-false}, 00:14:13.397 "ddgst": ${ddgst:-false} 00:14:13.397 }, 00:14:13.397 "method": "bdev_nvme_attach_controller" 00:14:13.397 } 00:14:13.397 EOF 00:14:13.397 )") 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:13.397 00:40:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:13.397 "params": { 00:14:13.397 "name": "Nvme0", 00:14:13.397 "trtype": "tcp", 00:14:13.397 "traddr": "10.0.0.2", 00:14:13.397 "adrfam": "ipv4", 00:14:13.397 "trsvcid": "4420", 00:14:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:13.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:13.397 "hdgst": false, 00:14:13.397 "ddgst": false 00:14:13.397 }, 00:14:13.397 "method": "bdev_nvme_attach_controller" 00:14:13.397 }' 00:14:13.397 [2024-07-16 00:40:31.154402] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:14:13.397 [2024-07-16 00:40:31.154465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975505 ] 00:14:13.397 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.656 [2024-07-16 00:40:31.236558] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.656 [2024-07-16 00:40:31.323240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.914 Running I/O for 10 seconds... 
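(The bdevperf launch traced above, stripped of the /dev/fd/63 plumbing, reduces to the sketch below; the flags and the controller entry are reproduced from the trace, and the process-substitution form is an assumed equivalent of feeding gen_nvmf_target_json's output through fd 63. The full generated config wraps this entry in gen_nvmf_target_json's usual envelope, which the trace does not print and is therefore left out.)

    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10        # queue depth 64, 64 KiB I/O, verify workload, 10 s
    # Controller entry emitted for subsystem 0, as printed in the trace:
    # { "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
    #   "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
    #   "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false },
    #   "method": "bdev_nvme_attach_controller" }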
00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.485 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:14.485 [2024-07-16 00:40:32.106030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.485 [2024-07-16 00:40:32.106073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.485 [2024-07-16 00:40:32.106087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.485 [2024-07-16 00:40:32.106097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.485 [2024-07-16 00:40:32.106108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.486 [2024-07-16 00:40:32.106124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.106134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.486 [2024-07-16 00:40:32.106144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.106154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbe080 is same with the state(5) to be set 00:14:14.486 [2024-07-16 00:40:32.109501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 
[2024-07-16 00:40:32.109679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109928] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.109977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.109988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.486 [2024-07-16 00:40:32.110334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.486 [2024-07-16 00:40:32.110445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.486 [2024-07-16 00:40:32.110459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:14.487 [2024-07-16 00:40:32.110681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:14.487 [2024-07-16 00:40:32.110707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 
[2024-07-16 00:40:32.110917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.110981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.110992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.487 [2024-07-16 00:40:32.111007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.111022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.111035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.111046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.111060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.111071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.111085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.487 [2024-07-16 00:40:32.111096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.487 [2024-07-16 00:40:32.111108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cf220 is same with the state(5) to be set 00:14:14.487 [2024-07-16 00:40:32.111166] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21cf220 was disconnected and freed. reset controller. 
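The waitforio helper traced just before this qpair dump boils down to a bounded poll of bdevperf's RPC socket: it reads `num_read_ops` for Nvme0n1 and declares success once at least 100 reads have completed (387 in this run). A minimal stand-alone sketch of that loop, assuming `rpc.py` is on PATH and the same socket path as in this run (not a verbatim copy of target/host_management.sh; the per-iteration sleep is an assumption):

```bash
#!/usr/bin/env bash
# Sketch: poll bdevperf over its RPC socket until Nvme0n1 has served >= 100 reads.
SOCK=/var/tmp/bdevperf.sock
ret=1
for i in {10..1}; do                                   # up to ten attempts, as in the trace
    reads=$(rpc.py -s "$SOCK" bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')          # same jq path as the logged command
    if [ "$reads" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 1                                            # assumption: brief back-off between polls
done
exit $ret
```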
00:14:14.487 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:14.487 [2024-07-16 00:40:32.112555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:14.487 task offset: 65408 on job bdev=Nvme0n1 fails 00:14:14.487 00:14:14.487 Latency(us) 00:14:14.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.487 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:14.487 Job: Nvme0n1 ended in about 0.47 seconds with error 00:14:14.487 Verification LBA range: start 0x0 length 0x400 00:14:14.487 Nvme0n1 : 0.47 958.74 59.92 136.96 0.00 56585.37 11200.70 53143.74 00:14:14.487 =================================================================================================================== 00:14:14.487 Total : 958.74 59.92 136.96 0.00 56585.37 11200.70 53143.74 00:14:14.487 [2024-07-16 00:40:32.114932] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:14.487 [2024-07-16 00:40:32.114952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dbe080 (9): Bad file descriptor 00:14:14.487 00:40:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.487 00:40:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:14.487 [2024-07-16 00:40:32.125553] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:15.438 00:40:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2975505 00:14:15.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2975505) - No such process 00:14:15.438 00:40:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:15.438 00:40:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:15.438 00:40:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:15.438 00:40:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:15.438 00:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:15.438 00:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:15.438 00:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:15.438 00:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:15.438 { 00:14:15.438 "params": { 00:14:15.438 "name": "Nvme$subsystem", 00:14:15.438 "trtype": "$TEST_TRANSPORT", 00:14:15.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:15.438 "adrfam": "ipv4", 00:14:15.438 "trsvcid": "$NVMF_PORT", 00:14:15.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:15.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:15.438 "hdgst": ${hdgst:-false}, 00:14:15.438 "ddgst": ${ddgst:-false} 00:14:15.438 }, 00:14:15.438 "method": "bdev_nvme_attach_controller" 00:14:15.438 } 00:14:15.438 EOF 00:14:15.438 )") 00:14:15.438 00:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:15.438 00:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:14:15.438 00:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:15.438 00:40:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:15.438 "params": { 00:14:15.438 "name": "Nvme0", 00:14:15.438 "trtype": "tcp", 00:14:15.438 "traddr": "10.0.0.2", 00:14:15.438 "adrfam": "ipv4", 00:14:15.438 "trsvcid": "4420", 00:14:15.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:15.438 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:15.438 "hdgst": false, 00:14:15.438 "ddgst": false 00:14:15.438 }, 00:14:15.438 "method": "bdev_nvme_attach_controller" 00:14:15.438 }' 00:14:15.438 [2024-07-16 00:40:33.177225] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:14:15.438 [2024-07-16 00:40:33.177292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2975960 ] 00:14:15.438 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.438 [2024-07-16 00:40:33.259296] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.696 [2024-07-16 00:40:33.347285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.954 Running I/O for 1 seconds... 00:14:16.891 00:14:16.891 Latency(us) 00:14:16.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.891 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:16.891 Verification LBA range: start 0x0 length 0x400 00:14:16.891 Nvme0n1 : 1.05 1094.47 68.40 0.00 0.00 57418.65 8519.68 53143.74 00:14:16.891 =================================================================================================================== 00:14:16.891 Total : 1094.47 68.40 0.00 0.00 57418.65 8519.68 53143.74 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:17.151 rmmod nvme_tcp 00:14:17.151 rmmod nvme_fabrics 00:14:17.151 rmmod nvme_keyring 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@489 -- # '[' -n 2975232 ']' 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2975232 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2975232 ']' 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2975232 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2975232 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2975232' 00:14:17.151 killing process with pid 2975232 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2975232 00:14:17.151 00:40:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2975232 00:14:17.410 [2024-07-16 00:40:35.161529] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:17.410 00:40:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:17.410 00:40:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:17.410 00:40:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:17.410 00:40:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.410 00:40:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.410 00:40:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.410 00:40:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.410 00:40:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.948 00:40:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.948 00:40:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:19.948 00:14:19.948 real 0m13.336s 00:14:19.948 user 0m24.312s 00:14:19.948 sys 0m5.607s 00:14:19.948 00:40:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:19.948 00:40:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:19.948 ************************************ 00:14:19.948 END TEST nvmf_host_management 00:14:19.948 ************************************ 00:14:19.948 00:40:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:19.948 00:40:37 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:19.948 00:40:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:19.948 00:40:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.948 00:40:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:19.948 ************************************ 00:14:19.948 START TEST nvmf_lvol 00:14:19.948 
************************************ 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:19.948 * Looking for test storage... 00:14:19.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.948 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.949 00:40:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:25.220 00:40:42 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:25.220 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:25.220 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:25.220 Found net devices under 0000:af:00.0: cvl_0_0 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:25.220 Found net devices under 0000:af:00.1: cvl_0_1 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:25.220 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:25.221 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.221 00:40:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:25.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:14:25.479 00:14:25.479 --- 10.0.0.2 ping statistics --- 00:14:25.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.479 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:25.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:14:25.479 00:14:25.479 --- 10.0.0.1 ping statistics --- 00:14:25.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.479 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2979784 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2979784 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 2979784 ']' 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.479 00:40:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:25.480 00:40:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:25.738 [2024-07-16 00:40:43.334638] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:14:25.738 [2024-07-16 00:40:43.334694] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.738 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.738 [2024-07-16 00:40:43.422073] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:25.738 [2024-07-16 00:40:43.513208] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.738 [2024-07-16 00:40:43.513252] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:25.738 [2024-07-16 00:40:43.513268] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.738 [2024-07-16 00:40:43.513277] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.738 [2024-07-16 00:40:43.513285] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.738 [2024-07-16 00:40:43.513337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.738 [2024-07-16 00:40:43.513450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.738 [2024-07-16 00:40:43.513450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.674 00:40:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:26.674 00:40:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:26.674 00:40:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.674 00:40:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:26.674 00:40:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:26.674 00:40:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.674 00:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:26.674 [2024-07-16 00:40:44.461333] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.674 00:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:27.241 00:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:27.241 00:40:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:27.241 00:40:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:27.241 00:40:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:27.500 00:40:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:27.791 00:40:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4e257191-cd8e-488b-ad9f-0ce4ffc9b9b6 00:14:27.791 00:40:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4e257191-cd8e-488b-ad9f-0ce4ffc9b9b6 lvol 20 00:14:28.050 00:40:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=faac4ed8-8049-436e-ba10-d06721bb7045 00:14:28.050 00:40:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:28.309 00:40:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 faac4ed8-8049-436e-ba10-d06721bb7045 00:14:28.568 00:40:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:28.826 [2024-07-16 00:40:46.571643] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.826 00:40:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:29.084 00:40:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2980586 00:14:29.084 00:40:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:29.084 00:40:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:29.084 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.020 00:40:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot faac4ed8-8049-436e-ba10-d06721bb7045 MY_SNAPSHOT 00:14:30.588 00:40:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e9910b09-4ed6-497e-b162-ee118327e521 00:14:30.588 00:40:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize faac4ed8-8049-436e-ba10-d06721bb7045 30 00:14:30.847 00:40:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e9910b09-4ed6-497e-b162-ee118327e521 MY_CLONE 00:14:31.107 00:40:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9e8364ea-aa7e-46f8-bc43-c030aa70c5c1 00:14:31.107 00:40:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9e8364ea-aa7e-46f8-bc43-c030aa70c5c1 00:14:31.675 00:40:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2980586 00:14:39.882 Initializing NVMe Controllers 00:14:39.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:39.882 Controller IO queue size 128, less than required. 00:14:39.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:39.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:39.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:39.882 Initialization complete. Launching workers. 
00:14:39.882 ======================================================== 00:14:39.882 Latency(us) 00:14:39.882 Device Information : IOPS MiB/s Average min max 00:14:39.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7001.10 27.35 18291.27 4358.68 114207.46 00:14:39.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8627.80 33.70 14837.36 4477.83 67874.13 00:14:39.882 ======================================================== 00:14:39.882 Total : 15628.89 61.05 16384.57 4358.68 114207.46 00:14:39.882 00:14:39.882 00:40:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:39.882 00:40:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete faac4ed8-8049-436e-ba10-d06721bb7045 00:14:40.141 00:40:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4e257191-cd8e-488b-ad9f-0ce4ffc9b9b6 00:14:40.400 00:40:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:40.400 00:40:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:40.400 00:40:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:40.400 00:40:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:40.400 00:40:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:40.400 00:40:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.400 00:40:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:40.400 00:40:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.400 00:40:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.400 rmmod nvme_tcp 00:14:40.400 rmmod nvme_fabrics 00:14:40.400 rmmod nvme_keyring 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2979784 ']' 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2979784 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2979784 ']' 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 2979784 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2979784 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2979784' 00:14:40.400 killing process with pid 2979784 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2979784 00:14:40.400 00:40:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2979784 00:14:40.658 00:40:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:40.658 
00:40:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:40.658 00:40:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:40.658 00:40:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:40.658 00:40:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:40.658 00:40:58 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.658 00:40:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.658 00:40:58 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:43.191 00:14:43.191 real 0m23.090s 00:14:43.191 user 1m8.117s 00:14:43.191 sys 0m7.351s 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:43.191 ************************************ 00:14:43.191 END TEST nvmf_lvol 00:14:43.191 ************************************ 00:14:43.191 00:41:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:43.191 00:41:00 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:43.191 00:41:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:43.191 00:41:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.191 00:41:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:43.191 ************************************ 00:14:43.191 START TEST nvmf_lvs_grow 00:14:43.191 ************************************ 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:43.191 * Looking for test storage... 
00:14:43.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:14:43.191 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:43.192 00:41:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:49.776 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:49.776 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:49.777 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:49.777 Found net devices under 0000:af:00.0: cvl_0_0 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:49.777 Found net devices under 0000:af:00.1: cvl_0_1 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:49.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:14:49.777 00:14:49.777 --- 10.0.0.2 ping statistics --- 00:14:49.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.777 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:14:49.777 00:14:49.777 --- 10.0.0.1 ping statistics --- 00:14:49.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.777 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2986142 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2986142 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2986142 ']' 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.777 00:41:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:49.777 [2024-07-16 00:41:06.749276] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:14:49.777 [2024-07-16 00:41:06.749339] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.777 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.777 [2024-07-16 00:41:06.837852] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.777 [2024-07-16 00:41:06.928074] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.777 [2024-07-16 00:41:06.928117] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:49.777 [2024-07-16 00:41:06.928127] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.777 [2024-07-16 00:41:06.928136] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.777 [2024-07-16 00:41:06.928144] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.777 [2024-07-16 00:41:06.928166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.346 00:41:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.346 00:41:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:50.346 00:41:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:50.346 00:41:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:50.346 00:41:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:50.346 00:41:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.346 00:41:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:50.606 [2024-07-16 00:41:08.203740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:50.606 ************************************ 00:14:50.606 START TEST lvs_grow_clean 00:14:50.606 ************************************ 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:50.606 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:50.865 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:50.865 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:51.125 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=dea2da0d-5f47-42d4-b7b8-befe48a422bc 00:14:51.125 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dea2da0d-5f47-42d4-b7b8-befe48a422bc 00:14:51.125 00:41:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:51.384 00:41:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:51.384 00:41:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:51.384 00:41:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dea2da0d-5f47-42d4-b7b8-befe48a422bc lvol 150 00:14:51.644 00:41:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fdbdbfd5-f14d-4c70-a738-0c73e4247963 00:14:51.644 00:41:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:51.644 00:41:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:51.903 [2024-07-16 00:41:09.506942] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:51.903 [2024-07-16 00:41:09.507005] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:51.903 true 00:14:51.903 00:41:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dea2da0d-5f47-42d4-b7b8-befe48a422bc 00:14:51.903 00:41:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:52.162 00:41:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:52.162 00:41:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:52.420 00:41:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fdbdbfd5-f14d-4c70-a738-0c73e4247963 00:14:52.679 00:41:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:52.939 [2024-07-16 00:41:10.558133] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.939 00:41:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:53.508 00:41:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2986973 00:14:53.508 00:41:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:53.508 00:41:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:53.508 00:41:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2986973 /var/tmp/bdevperf.sock 00:14:53.508 00:41:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2986973 ']' 00:14:53.508 00:41:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.508 00:41:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.508 00:41:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.508 00:41:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.508 00:41:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:53.508 [2024-07-16 00:41:11.100597] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:14:53.508 [2024-07-16 00:41:11.100654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2986973 ] 00:14:53.508 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.508 [2024-07-16 00:41:11.182918] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.508 [2024-07-16 00:41:11.283275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.445 00:41:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.445 00:41:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:54.445 00:41:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:54.704 Nvme0n1 00:14:54.704 00:41:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:54.964 [ 00:14:54.964 { 00:14:54.964 "name": "Nvme0n1", 00:14:54.964 "aliases": [ 00:14:54.964 "fdbdbfd5-f14d-4c70-a738-0c73e4247963" 00:14:54.964 ], 00:14:54.964 "product_name": "NVMe disk", 00:14:54.964 "block_size": 4096, 00:14:54.964 "num_blocks": 38912, 00:14:54.964 "uuid": "fdbdbfd5-f14d-4c70-a738-0c73e4247963", 00:14:54.964 "assigned_rate_limits": { 00:14:54.964 "rw_ios_per_sec": 0, 00:14:54.964 "rw_mbytes_per_sec": 0, 00:14:54.964 "r_mbytes_per_sec": 0, 00:14:54.964 "w_mbytes_per_sec": 0 00:14:54.964 }, 00:14:54.964 "claimed": false, 00:14:54.964 "zoned": false, 00:14:54.964 "supported_io_types": { 00:14:54.964 "read": true, 00:14:54.964 "write": true, 00:14:54.964 "unmap": true, 00:14:54.964 "flush": true, 00:14:54.964 "reset": true, 00:14:54.964 "nvme_admin": true, 00:14:54.964 "nvme_io": true, 00:14:54.964 "nvme_io_md": false, 00:14:54.964 "write_zeroes": true, 00:14:54.964 "zcopy": false, 00:14:54.964 "get_zone_info": false, 00:14:54.964 "zone_management": false, 00:14:54.964 "zone_append": false, 00:14:54.964 "compare": true, 00:14:54.964 "compare_and_write": true, 00:14:54.964 "abort": true, 00:14:54.964 "seek_hole": false, 00:14:54.964 "seek_data": false, 00:14:54.964 "copy": true, 00:14:54.964 "nvme_iov_md": false 00:14:54.964 }, 00:14:54.964 "memory_domains": [ 00:14:54.964 { 00:14:54.964 "dma_device_id": "system", 00:14:54.964 "dma_device_type": 1 00:14:54.964 } 00:14:54.964 ], 00:14:54.964 "driver_specific": { 00:14:54.964 "nvme": [ 00:14:54.964 { 00:14:54.964 "trid": { 00:14:54.964 "trtype": "TCP", 00:14:54.964 "adrfam": "IPv4", 00:14:54.964 "traddr": "10.0.0.2", 00:14:54.964 "trsvcid": "4420", 00:14:54.964 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:54.964 }, 00:14:54.964 "ctrlr_data": { 00:14:54.964 "cntlid": 1, 00:14:54.964 "vendor_id": "0x8086", 00:14:54.964 "model_number": "SPDK bdev Controller", 00:14:54.964 "serial_number": "SPDK0", 00:14:54.964 "firmware_revision": "24.09", 00:14:54.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:54.964 "oacs": { 00:14:54.964 "security": 0, 00:14:54.964 "format": 0, 00:14:54.964 "firmware": 0, 00:14:54.964 "ns_manage": 0 00:14:54.964 }, 00:14:54.964 "multi_ctrlr": true, 00:14:54.964 "ana_reporting": false 00:14:54.964 }, 
00:14:54.964 "vs": { 00:14:54.964 "nvme_version": "1.3" 00:14:54.964 }, 00:14:54.964 "ns_data": { 00:14:54.964 "id": 1, 00:14:54.964 "can_share": true 00:14:54.964 } 00:14:54.964 } 00:14:54.964 ], 00:14:54.964 "mp_policy": "active_passive" 00:14:54.964 } 00:14:54.964 } 00:14:54.964 ] 00:14:54.964 00:41:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2987238 00:14:54.965 00:41:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:54.965 00:41:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:54.965 Running I/O for 10 seconds... 00:14:56.342 Latency(us) 00:14:56.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.342 Nvme0n1 : 1.00 15247.00 59.56 0.00 0.00 0.00 0.00 0.00 00:14:56.342 =================================================================================================================== 00:14:56.342 Total : 15247.00 59.56 0.00 0.00 0.00 0.00 0.00 00:14:56.342 00:14:56.908 00:41:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dea2da0d-5f47-42d4-b7b8-befe48a422bc 00:14:57.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.165 Nvme0n1 : 2.00 15318.50 59.84 0.00 0.00 0.00 0.00 0.00 00:14:57.165 =================================================================================================================== 00:14:57.166 Total : 15318.50 59.84 0.00 0.00 0.00 0.00 0.00 00:14:57.166 00:14:57.166 true 00:14:57.166 00:41:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:57.166 00:41:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dea2da0d-5f47-42d4-b7b8-befe48a422bc 00:14:57.424 00:41:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:57.424 00:41:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:57.424 00:41:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2987238 00:14:57.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.991 Nvme0n1 : 3.00 15356.33 59.99 0.00 0.00 0.00 0.00 0.00 00:14:57.991 =================================================================================================================== 00:14:57.991 Total : 15356.33 59.99 0.00 0.00 0.00 0.00 0.00 00:14:57.991 00:14:59.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.370 Nvme0n1 : 4.00 15392.25 60.13 0.00 0.00 0.00 0.00 0.00 00:14:59.370 =================================================================================================================== 00:14:59.370 Total : 15392.25 60.13 0.00 0.00 0.00 0.00 0.00 00:14:59.370 00:14:59.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.940 Nvme0n1 : 5.00 15426.80 60.26 0.00 0.00 0.00 0.00 0.00 00:14:59.940 =================================================================================================================== 00:14:59.940 
Total : 15426.80 60.26 0.00 0.00 0.00 0.00 0.00 00:14:59.940 00:15:01.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.319 Nvme0n1 : 6.00 15444.00 60.33 0.00 0.00 0.00 0.00 0.00 00:15:01.319 =================================================================================================================== 00:15:01.319 Total : 15444.00 60.33 0.00 0.00 0.00 0.00 0.00 00:15:01.319 00:15:02.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.256 Nvme0n1 : 7.00 15460.29 60.39 0.00 0.00 0.00 0.00 0.00 00:15:02.256 =================================================================================================================== 00:15:02.256 Total : 15460.29 60.39 0.00 0.00 0.00 0.00 0.00 00:15:02.256 00:15:03.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.193 Nvme0n1 : 8.00 15471.75 60.44 0.00 0.00 0.00 0.00 0.00 00:15:03.193 =================================================================================================================== 00:15:03.193 Total : 15471.75 60.44 0.00 0.00 0.00 0.00 0.00 00:15:03.193 00:15:04.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.129 Nvme0n1 : 9.00 15485.56 60.49 0.00 0.00 0.00 0.00 0.00 00:15:04.129 =================================================================================================================== 00:15:04.129 Total : 15485.56 60.49 0.00 0.00 0.00 0.00 0.00 00:15:04.129 00:15:05.066 00:15:05.066 Latency(us) 00:15:05.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.066 Nvme0n1 : 10.00 15490.98 60.51 0.00 0.00 8256.41 4051.32 16205.27 00:15:05.066 =================================================================================================================== 00:15:05.066 Total : 15490.98 60.51 0.00 0.00 8256.41 4051.32 16205.27 00:15:05.066 0 00:15:05.066 00:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2986973 00:15:05.066 00:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2986973 ']' 00:15:05.066 00:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2986973 00:15:05.066 00:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:05.066 00:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:05.066 00:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2986973 00:15:05.066 00:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:05.066 00:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:05.066 00:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2986973' 00:15:05.066 killing process with pid 2986973 00:15:05.066 00:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2986973 00:15:05.066 Received shutdown signal, test time was about 10.000000 seconds 00:15:05.066 00:15:05.066 Latency(us) 00:15:05.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.066 
=================================================================================================================== 00:15:05.066 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:05.066 00:41:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2986973 00:15:05.324 00:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:05.584 00:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:05.843 00:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dea2da0d-5f47-42d4-b7b8-befe48a422bc 00:15:05.843 00:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:06.100 00:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:06.100 00:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:06.100 00:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:06.358 [2024-07-16 00:41:24.082129] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:06.358 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dea2da0d-5f47-42d4-b7b8-befe48a422bc 00:15:06.358 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:06.358 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dea2da0d-5f47-42d4-b7b8-befe48a422bc 00:15:06.359 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:06.359 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:06.359 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:06.359 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:06.359 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:06.359 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:06.359 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:06.359 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:06.359 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dea2da0d-5f47-42d4-b7b8-befe48a422bc 00:15:06.617 request: 00:15:06.617 { 00:15:06.617 "uuid": "dea2da0d-5f47-42d4-b7b8-befe48a422bc", 00:15:06.617 "method": "bdev_lvol_get_lvstores", 00:15:06.617 "req_id": 1 00:15:06.617 } 00:15:06.617 Got JSON-RPC error response 00:15:06.617 response: 00:15:06.617 { 00:15:06.618 "code": -19, 00:15:06.618 "message": "No such device" 00:15:06.618 } 00:15:06.618 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:06.618 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:06.618 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:06.618 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:06.618 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:06.877 aio_bdev 00:15:06.877 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fdbdbfd5-f14d-4c70-a738-0c73e4247963 00:15:06.877 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=fdbdbfd5-f14d-4c70-a738-0c73e4247963 00:15:06.877 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:06.877 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:06.877 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:06.877 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:06.877 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:07.136 00:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fdbdbfd5-f14d-4c70-a738-0c73e4247963 -t 2000 00:15:07.395 [ 00:15:07.395 { 00:15:07.395 "name": "fdbdbfd5-f14d-4c70-a738-0c73e4247963", 00:15:07.395 "aliases": [ 00:15:07.395 "lvs/lvol" 00:15:07.395 ], 00:15:07.395 "product_name": "Logical Volume", 00:15:07.395 "block_size": 4096, 00:15:07.395 "num_blocks": 38912, 00:15:07.395 "uuid": "fdbdbfd5-f14d-4c70-a738-0c73e4247963", 00:15:07.395 "assigned_rate_limits": { 00:15:07.395 "rw_ios_per_sec": 0, 00:15:07.395 "rw_mbytes_per_sec": 0, 00:15:07.395 "r_mbytes_per_sec": 0, 00:15:07.395 "w_mbytes_per_sec": 0 00:15:07.395 }, 00:15:07.395 "claimed": false, 00:15:07.395 "zoned": false, 00:15:07.395 "supported_io_types": { 00:15:07.395 "read": true, 00:15:07.395 "write": true, 00:15:07.395 "unmap": true, 00:15:07.395 "flush": false, 00:15:07.395 "reset": true, 00:15:07.395 "nvme_admin": false, 00:15:07.395 "nvme_io": false, 00:15:07.395 "nvme_io_md": false, 00:15:07.395 "write_zeroes": true, 00:15:07.395 "zcopy": false, 00:15:07.395 "get_zone_info": false, 00:15:07.395 "zone_management": false, 00:15:07.395 "zone_append": false, 00:15:07.395 "compare": false, 00:15:07.395 "compare_and_write": false, 00:15:07.395 "abort": false, 00:15:07.395 "seek_hole": true, 00:15:07.395 
"seek_data": true, 00:15:07.395 "copy": false, 00:15:07.395 "nvme_iov_md": false 00:15:07.395 }, 00:15:07.395 "driver_specific": { 00:15:07.395 "lvol": { 00:15:07.395 "lvol_store_uuid": "dea2da0d-5f47-42d4-b7b8-befe48a422bc", 00:15:07.395 "base_bdev": "aio_bdev", 00:15:07.395 "thin_provision": false, 00:15:07.395 "num_allocated_clusters": 38, 00:15:07.395 "snapshot": false, 00:15:07.395 "clone": false, 00:15:07.395 "esnap_clone": false 00:15:07.395 } 00:15:07.395 } 00:15:07.395 } 00:15:07.395 ] 00:15:07.395 00:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:07.395 00:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dea2da0d-5f47-42d4-b7b8-befe48a422bc 00:15:07.395 00:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:07.654 00:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:07.654 00:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dea2da0d-5f47-42d4-b7b8-befe48a422bc 00:15:07.654 00:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:07.912 00:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:07.912 00:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fdbdbfd5-f14d-4c70-a738-0c73e4247963 00:15:08.171 00:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dea2da0d-5f47-42d4-b7b8-befe48a422bc 00:15:08.430 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:08.689 00:15:08.689 real 0m18.035s 00:15:08.689 user 0m18.113s 00:15:08.689 sys 0m1.553s 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:08.689 ************************************ 00:15:08.689 END TEST lvs_grow_clean 00:15:08.689 ************************************ 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:08.689 ************************************ 00:15:08.689 START TEST lvs_grow_dirty 00:15:08.689 ************************************ 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 
00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:08.689 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:08.947 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:08.947 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:09.207 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:09.207 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:09.207 00:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:09.465 00:41:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:09.465 00:41:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:09.465 00:41:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u deace57b-1640-4de6-a012-a9d8c5b4a57b lvol 150 00:15:09.724 00:41:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4bad4b51-1d94-4e15-9de9-a1e73780b5a0 00:15:09.724 00:41:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:09.724 00:41:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:09.981 [2024-07-16 00:41:27.609907] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:09.981 [2024-07-16 00:41:27.609968] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev 
event: type 1 00:15:09.981 true 00:15:09.981 00:41:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:09.981 00:41:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:10.239 00:41:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:10.239 00:41:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:10.498 00:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4bad4b51-1d94-4e15-9de9-a1e73780b5a0 00:15:10.757 00:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:10.757 [2024-07-16 00:41:28.576832] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.016 00:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:11.016 00:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2990164 00:15:11.016 00:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:11.016 00:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:11.016 00:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2990164 /var/tmp/bdevperf.sock 00:15:11.016 00:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2990164 ']' 00:15:11.016 00:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.016 00:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.016 00:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:11.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:11.016 00:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.016 00:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:11.273 [2024-07-16 00:41:28.876963] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
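The dirty variant exports the freshly created lvol over NVMe/TCP before bdevperf is started against it. A sketch consolidated from the rpc.py calls traced above (and the initiator-side attach traced just below), assuming the same NQN, lvol UUID and listener address as this run:

# target side: subsystem, namespace, data and discovery listeners
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4bad4b51-1d94-4e15-9de9-a1e73780b5a0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# initiator side: attach the exported namespace inside bdevperf via its own RPC socket
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0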
00:15:11.273 [2024-07-16 00:41:28.877022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990164 ] 00:15:11.273 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.273 [2024-07-16 00:41:28.960406] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.273 [2024-07-16 00:41:29.062859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.206 00:41:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.206 00:41:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:12.206 00:41:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:12.465 Nvme0n1 00:15:12.465 00:41:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:12.723 [ 00:15:12.723 { 00:15:12.723 "name": "Nvme0n1", 00:15:12.723 "aliases": [ 00:15:12.723 "4bad4b51-1d94-4e15-9de9-a1e73780b5a0" 00:15:12.723 ], 00:15:12.723 "product_name": "NVMe disk", 00:15:12.723 "block_size": 4096, 00:15:12.723 "num_blocks": 38912, 00:15:12.723 "uuid": "4bad4b51-1d94-4e15-9de9-a1e73780b5a0", 00:15:12.723 "assigned_rate_limits": { 00:15:12.723 "rw_ios_per_sec": 0, 00:15:12.723 "rw_mbytes_per_sec": 0, 00:15:12.723 "r_mbytes_per_sec": 0, 00:15:12.723 "w_mbytes_per_sec": 0 00:15:12.723 }, 00:15:12.723 "claimed": false, 00:15:12.723 "zoned": false, 00:15:12.723 "supported_io_types": { 00:15:12.723 "read": true, 00:15:12.723 "write": true, 00:15:12.723 "unmap": true, 00:15:12.723 "flush": true, 00:15:12.723 "reset": true, 00:15:12.723 "nvme_admin": true, 00:15:12.723 "nvme_io": true, 00:15:12.723 "nvme_io_md": false, 00:15:12.723 "write_zeroes": true, 00:15:12.723 "zcopy": false, 00:15:12.723 "get_zone_info": false, 00:15:12.723 "zone_management": false, 00:15:12.723 "zone_append": false, 00:15:12.723 "compare": true, 00:15:12.723 "compare_and_write": true, 00:15:12.723 "abort": true, 00:15:12.723 "seek_hole": false, 00:15:12.723 "seek_data": false, 00:15:12.723 "copy": true, 00:15:12.723 "nvme_iov_md": false 00:15:12.723 }, 00:15:12.723 "memory_domains": [ 00:15:12.723 { 00:15:12.723 "dma_device_id": "system", 00:15:12.723 "dma_device_type": 1 00:15:12.723 } 00:15:12.723 ], 00:15:12.723 "driver_specific": { 00:15:12.723 "nvme": [ 00:15:12.723 { 00:15:12.723 "trid": { 00:15:12.723 "trtype": "TCP", 00:15:12.723 "adrfam": "IPv4", 00:15:12.723 "traddr": "10.0.0.2", 00:15:12.723 "trsvcid": "4420", 00:15:12.723 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:12.723 }, 00:15:12.723 "ctrlr_data": { 00:15:12.723 "cntlid": 1, 00:15:12.723 "vendor_id": "0x8086", 00:15:12.723 "model_number": "SPDK bdev Controller", 00:15:12.723 "serial_number": "SPDK0", 00:15:12.723 "firmware_revision": "24.09", 00:15:12.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:12.723 "oacs": { 00:15:12.723 "security": 0, 00:15:12.723 "format": 0, 00:15:12.723 "firmware": 0, 00:15:12.723 "ns_manage": 0 00:15:12.723 }, 00:15:12.723 "multi_ctrlr": true, 00:15:12.723 "ana_reporting": false 00:15:12.723 }, 
00:15:12.723 "vs": { 00:15:12.723 "nvme_version": "1.3" 00:15:12.723 }, 00:15:12.723 "ns_data": { 00:15:12.723 "id": 1, 00:15:12.723 "can_share": true 00:15:12.723 } 00:15:12.723 } 00:15:12.723 ], 00:15:12.723 "mp_policy": "active_passive" 00:15:12.723 } 00:15:12.723 } 00:15:12.723 ] 00:15:12.723 00:41:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2990437 00:15:12.723 00:41:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:12.723 00:41:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:12.723 Running I/O for 10 seconds... 00:15:14.100 Latency(us) 00:15:14.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.100 Nvme0n1 : 1.00 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:15:14.100 =================================================================================================================== 00:15:14.100 Total : 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:15:14.100 00:15:14.667 00:41:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:14.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.925 Nvme0n1 : 2.00 15304.50 59.78 0.00 0.00 0.00 0.00 0.00 00:15:14.925 =================================================================================================================== 00:15:14.925 Total : 15304.50 59.78 0.00 0.00 0.00 0.00 0.00 00:15:14.925 00:15:14.925 true 00:15:14.925 00:41:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:14.925 00:41:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:15.203 00:41:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:15.203 00:41:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:15.203 00:41:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2990437 00:15:15.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:15.805 Nvme0n1 : 3.00 15346.67 59.95 0.00 0.00 0.00 0.00 0.00 00:15:15.805 =================================================================================================================== 00:15:15.805 Total : 15346.67 59.95 0.00 0.00 0.00 0.00 0.00 00:15:15.805 00:15:16.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.741 Nvme0n1 : 4.00 15383.50 60.09 0.00 0.00 0.00 0.00 0.00 00:15:16.741 =================================================================================================================== 00:15:16.741 Total : 15383.50 60.09 0.00 0.00 0.00 0.00 0.00 00:15:16.741 00:15:18.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:18.120 Nvme0n1 : 5.00 15405.60 60.18 0.00 0.00 0.00 0.00 0.00 00:15:18.120 =================================================================================================================== 00:15:18.120 
Total : 15405.60 60.18 0.00 0.00 0.00 0.00 0.00 00:15:18.120 00:15:19.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.055 Nvme0n1 : 6.00 15420.33 60.24 0.00 0.00 0.00 0.00 0.00 00:15:19.055 =================================================================================================================== 00:15:19.055 Total : 15420.33 60.24 0.00 0.00 0.00 0.00 0.00 00:15:19.055 00:15:19.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.990 Nvme0n1 : 7.00 15431.29 60.28 0.00 0.00 0.00 0.00 0.00 00:15:19.990 =================================================================================================================== 00:15:19.990 Total : 15431.29 60.28 0.00 0.00 0.00 0.00 0.00 00:15:19.990 00:15:20.926 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.926 Nvme0n1 : 8.00 15439.50 60.31 0.00 0.00 0.00 0.00 0.00 00:15:20.926 =================================================================================================================== 00:15:20.926 Total : 15439.50 60.31 0.00 0.00 0.00 0.00 0.00 00:15:20.926 00:15:21.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.861 Nvme0n1 : 9.00 15453.33 60.36 0.00 0.00 0.00 0.00 0.00 00:15:21.861 =================================================================================================================== 00:15:21.861 Total : 15453.33 60.36 0.00 0.00 0.00 0.00 0.00 00:15:21.861 00:15:22.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.798 Nvme0n1 : 10.00 15457.40 60.38 0.00 0.00 0.00 0.00 0.00 00:15:22.798 =================================================================================================================== 00:15:22.798 Total : 15457.40 60.38 0.00 0.00 0.00 0.00 0.00 00:15:22.798 00:15:22.798 00:15:22.798 Latency(us) 00:15:22.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.798 Nvme0n1 : 10.01 15458.46 60.38 0.00 0.00 8274.27 5213.09 17515.99 00:15:22.798 =================================================================================================================== 00:15:22.798 Total : 15458.46 60.38 0.00 0.00 8274.27 5213.09 17515.99 00:15:22.798 0 00:15:22.798 00:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2990164 00:15:22.798 00:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2990164 ']' 00:15:22.798 00:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2990164 00:15:22.798 00:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:22.798 00:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:22.798 00:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2990164 00:15:22.798 00:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:22.798 00:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:22.798 00:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2990164' 00:15:22.798 killing process with pid 2990164 00:15:22.798 00:41:40 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2990164 00:15:22.798 Received shutdown signal, test time was about 10.000000 seconds 00:15:22.798 00:15:22.798 Latency(us) 00:15:22.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.798 =================================================================================================================== 00:15:22.798 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:22.798 00:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2990164 00:15:23.058 00:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:23.316 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:23.575 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:23.575 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:23.834 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:23.834 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:23.834 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2986142 00:15:23.834 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2986142 00:15:24.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2986142 Killed "${NVMF_APP[@]}" "$@" 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2992535 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2992535 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2992535 ']' 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
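Because the lvstore is left dirty at this point (the nvmf app is killed with kill -9 while the lvol still holds data), the restarted target has to rediscover it. A sketch of the recovery step traced in the next few entries, assuming rpc.py from the repo root and a placeholder path for the AIO backing file (this run uses test/nvmf/target/aio_bdev): re-registering the AIO bdev triggers the blobstore recovery notice and the lvol becomes visible again.

scripts/rpc.py bdev_aio_create /path/to/aio_backing_file aio_bdev 4096   # placeholder path
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py bdev_get_bdevs -b 4bad4b51-1d94-4e15-9de9-a1e73780b5a0 -t 2000
scripts/rpc.py bdev_lvol_get_lvstores -u deace57b-1640-4de6-a012-a9d8c5b4a57b | jq -r '.[0].free_clusters'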
00:15:24.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:24.093 00:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:24.093 [2024-07-16 00:41:41.738626] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:15:24.093 [2024-07-16 00:41:41.738687] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.093 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.093 [2024-07-16 00:41:41.827747] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.093 [2024-07-16 00:41:41.916398] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.093 [2024-07-16 00:41:41.916438] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.093 [2024-07-16 00:41:41.916448] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.093 [2024-07-16 00:41:41.916457] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.093 [2024-07-16 00:41:41.916465] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.093 [2024-07-16 00:41:41.916486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.031 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.031 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:25.031 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.031 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:25.031 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:25.031 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.031 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:25.291 [2024-07-16 00:41:42.948038] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:25.291 [2024-07-16 00:41:42.948143] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:25.291 [2024-07-16 00:41:42.948180] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:25.291 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:25.291 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4bad4b51-1d94-4e15-9de9-a1e73780b5a0 00:15:25.291 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=4bad4b51-1d94-4e15-9de9-a1e73780b5a0 00:15:25.291 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:25.291 00:41:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:25.291 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:25.291 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:25.291 00:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:25.562 00:41:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4bad4b51-1d94-4e15-9de9-a1e73780b5a0 -t 2000 00:15:25.821 [ 00:15:25.821 { 00:15:25.821 "name": "4bad4b51-1d94-4e15-9de9-a1e73780b5a0", 00:15:25.821 "aliases": [ 00:15:25.821 "lvs/lvol" 00:15:25.821 ], 00:15:25.821 "product_name": "Logical Volume", 00:15:25.821 "block_size": 4096, 00:15:25.821 "num_blocks": 38912, 00:15:25.821 "uuid": "4bad4b51-1d94-4e15-9de9-a1e73780b5a0", 00:15:25.821 "assigned_rate_limits": { 00:15:25.821 "rw_ios_per_sec": 0, 00:15:25.821 "rw_mbytes_per_sec": 0, 00:15:25.821 "r_mbytes_per_sec": 0, 00:15:25.821 "w_mbytes_per_sec": 0 00:15:25.821 }, 00:15:25.821 "claimed": false, 00:15:25.821 "zoned": false, 00:15:25.821 "supported_io_types": { 00:15:25.821 "read": true, 00:15:25.821 "write": true, 00:15:25.821 "unmap": true, 00:15:25.821 "flush": false, 00:15:25.821 "reset": true, 00:15:25.821 "nvme_admin": false, 00:15:25.821 "nvme_io": false, 00:15:25.821 "nvme_io_md": false, 00:15:25.821 "write_zeroes": true, 00:15:25.821 "zcopy": false, 00:15:25.821 "get_zone_info": false, 00:15:25.821 "zone_management": false, 00:15:25.821 "zone_append": false, 00:15:25.821 "compare": false, 00:15:25.821 "compare_and_write": false, 00:15:25.821 "abort": false, 00:15:25.821 "seek_hole": true, 00:15:25.821 "seek_data": true, 00:15:25.821 "copy": false, 00:15:25.821 "nvme_iov_md": false 00:15:25.821 }, 00:15:25.821 "driver_specific": { 00:15:25.821 "lvol": { 00:15:25.821 "lvol_store_uuid": "deace57b-1640-4de6-a012-a9d8c5b4a57b", 00:15:25.821 "base_bdev": "aio_bdev", 00:15:25.821 "thin_provision": false, 00:15:25.821 "num_allocated_clusters": 38, 00:15:25.821 "snapshot": false, 00:15:25.821 "clone": false, 00:15:25.821 "esnap_clone": false 00:15:25.821 } 00:15:25.821 } 00:15:25.821 } 00:15:25.821 ] 00:15:25.821 00:41:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:25.821 00:41:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:25.821 00:41:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:26.080 00:41:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:26.080 00:41:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:26.080 00:41:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:26.338 00:41:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:26.338 00:41:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:26.338 [2024-07-16 00:41:44.169036] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:26.596 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:26.596 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:26.596 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:26.596 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.596 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.596 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.596 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.596 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.596 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.596 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.596 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:26.596 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:26.854 request: 00:15:26.854 { 00:15:26.854 "uuid": "deace57b-1640-4de6-a012-a9d8c5b4a57b", 00:15:26.854 "method": "bdev_lvol_get_lvstores", 00:15:26.854 "req_id": 1 00:15:26.854 } 00:15:26.854 Got JSON-RPC error response 00:15:26.854 response: 00:15:26.854 { 00:15:26.854 "code": -19, 00:15:26.854 "message": "No such device" 00:15:26.854 } 00:15:26.854 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:26.854 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.854 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:26.854 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.854 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:26.854 aio_bdev 00:15:27.113 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4bad4b51-1d94-4e15-9de9-a1e73780b5a0 00:15:27.113 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@897 -- # local bdev_name=4bad4b51-1d94-4e15-9de9-a1e73780b5a0 00:15:27.113 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:27.113 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:27.113 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:27.113 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:27.113 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:27.113 00:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4bad4b51-1d94-4e15-9de9-a1e73780b5a0 -t 2000 00:15:27.372 [ 00:15:27.372 { 00:15:27.372 "name": "4bad4b51-1d94-4e15-9de9-a1e73780b5a0", 00:15:27.372 "aliases": [ 00:15:27.372 "lvs/lvol" 00:15:27.372 ], 00:15:27.372 "product_name": "Logical Volume", 00:15:27.372 "block_size": 4096, 00:15:27.372 "num_blocks": 38912, 00:15:27.372 "uuid": "4bad4b51-1d94-4e15-9de9-a1e73780b5a0", 00:15:27.372 "assigned_rate_limits": { 00:15:27.372 "rw_ios_per_sec": 0, 00:15:27.372 "rw_mbytes_per_sec": 0, 00:15:27.372 "r_mbytes_per_sec": 0, 00:15:27.372 "w_mbytes_per_sec": 0 00:15:27.372 }, 00:15:27.372 "claimed": false, 00:15:27.372 "zoned": false, 00:15:27.372 "supported_io_types": { 00:15:27.372 "read": true, 00:15:27.372 "write": true, 00:15:27.372 "unmap": true, 00:15:27.372 "flush": false, 00:15:27.372 "reset": true, 00:15:27.372 "nvme_admin": false, 00:15:27.372 "nvme_io": false, 00:15:27.372 "nvme_io_md": false, 00:15:27.372 "write_zeroes": true, 00:15:27.372 "zcopy": false, 00:15:27.372 "get_zone_info": false, 00:15:27.372 "zone_management": false, 00:15:27.372 "zone_append": false, 00:15:27.372 "compare": false, 00:15:27.372 "compare_and_write": false, 00:15:27.372 "abort": false, 00:15:27.372 "seek_hole": true, 00:15:27.372 "seek_data": true, 00:15:27.372 "copy": false, 00:15:27.372 "nvme_iov_md": false 00:15:27.372 }, 00:15:27.372 "driver_specific": { 00:15:27.372 "lvol": { 00:15:27.372 "lvol_store_uuid": "deace57b-1640-4de6-a012-a9d8c5b4a57b", 00:15:27.372 "base_bdev": "aio_bdev", 00:15:27.372 "thin_provision": false, 00:15:27.372 "num_allocated_clusters": 38, 00:15:27.372 "snapshot": false, 00:15:27.372 "clone": false, 00:15:27.372 "esnap_clone": false 00:15:27.372 } 00:15:27.372 } 00:15:27.372 } 00:15:27.372 ] 00:15:27.372 00:41:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:27.372 00:41:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:27.372 00:41:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:27.630 00:41:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:27.630 00:41:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:27.630 00:41:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:15:27.887 00:41:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:27.887 00:41:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4bad4b51-1d94-4e15-9de9-a1e73780b5a0 00:15:28.145 00:41:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u deace57b-1640-4de6-a012-a9d8c5b4a57b 00:15:28.402 00:41:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:28.660 00:41:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:28.661 00:15:28.661 real 0m20.102s 00:15:28.661 user 0m52.203s 00:15:28.661 sys 0m3.623s 00:15:28.661 00:41:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.661 00:41:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:28.661 ************************************ 00:15:28.661 END TEST lvs_grow_dirty 00:15:28.661 ************************************ 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:28.919 nvmf_trace.0 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:28.919 rmmod nvme_tcp 00:15:28.919 rmmod nvme_fabrics 00:15:28.919 rmmod nvme_keyring 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:28.919 00:41:46 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2992535 ']' 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2992535 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2992535 ']' 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2992535 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2992535 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2992535' 00:15:28.919 killing process with pid 2992535 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2992535 00:15:28.919 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2992535 00:15:29.178 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.178 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.178 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.178 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.178 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.178 00:41:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.178 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.178 00:41:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.720 00:41:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:31.720 00:15:31.720 real 0m48.432s 00:15:31.720 user 1m17.896s 00:15:31.720 sys 0m10.281s 00:15:31.720 00:41:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:31.720 00:41:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:31.720 ************************************ 00:15:31.720 END TEST nvmf_lvs_grow 00:15:31.720 ************************************ 00:15:31.720 00:41:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:31.720 00:41:48 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:31.720 00:41:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:31.720 00:41:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.720 00:41:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:31.720 ************************************ 00:15:31.720 START TEST nvmf_bdev_io_wait 00:15:31.720 ************************************ 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:31.720 * Looking for test storage... 
00:15:31.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.720 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:31.721 00:41:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.295 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:38.296 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:38.296 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:38.296 Found net devices under 0000:af:00.0: cvl_0_0 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:38.296 Found net devices under 0000:af:00.1: cvl_0_1 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.296 00:41:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:38.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:38.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:15:38.296 00:15:38.296 --- 10.0.0.2 ping statistics --- 00:15:38.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.296 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:38.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:15:38.296 00:15:38.296 --- 10.0.0.1 ping statistics --- 00:15:38.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.296 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2997106 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2997106 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2997106 ']' 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.296 00:41:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.296 [2024-07-16 00:41:55.271682] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
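The nvmf_tcp_init block above moves one port of the e810 pair (cvl_0_0) into a private network namespace so target and initiator can talk over real NICs on the same host; the two pings confirm reachability in both directions before the target is launched. A sketch consolidated from that trace, using the interface names and addresses of this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator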
00:15:38.296 [2024-07-16 00:41:55.271791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.296 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.296 [2024-07-16 00:41:55.403005] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.296 [2024-07-16 00:41:55.495666] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.296 [2024-07-16 00:41:55.495709] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.296 [2024-07-16 00:41:55.495719] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.296 [2024-07-16 00:41:55.495728] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.296 [2024-07-16 00:41:55.495735] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.296 [2024-07-16 00:41:55.495787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.296 [2024-07-16 00:41:55.495829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.296 [2024-07-16 00:41:55.495939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.296 [2024-07-16 00:41:55.495939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.562 [2024-07-16 00:41:56.294879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
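The trace above brings the NVMe-oF/TCP target up inside a dedicated network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator side keeps cvl_0_1 at 10.0.0.1/24, an iptables rule opens port 4420, a ping in each direction verifies the path, and nvmf_tgt is then started in the namespace with --wait-for-rpc and configured over its RPC socket. A minimal sketch of the same bring-up done by hand (assumes the stock scripts/rpc.py client and the default /var/tmp/spdk.sock; repository paths shortened):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
./scripts/rpc.py bdev_set_options -p 5 -c 1          # tiny bdev_io pool/cache, as in bdev_io_wait.sh@18
./scripts/rpc.py framework_start_init                # leave the --wait-for-rpc hold
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192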
00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.562 Malloc0 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.562 [2024-07-16 00:41:56.365159] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2997383 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2997385 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:38.562 { 00:15:38.562 "params": { 00:15:38.562 "name": "Nvme$subsystem", 00:15:38.562 "trtype": "$TEST_TRANSPORT", 00:15:38.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:38.562 "adrfam": "ipv4", 00:15:38.562 "trsvcid": "$NVMF_PORT", 00:15:38.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:38.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:38.562 "hdgst": ${hdgst:-false}, 00:15:38.562 "ddgst": ${ddgst:-false} 00:15:38.562 }, 00:15:38.562 "method": "bdev_nvme_attach_controller" 00:15:38.562 } 00:15:38.562 EOF 00:15:38.562 )") 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2997387 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:38.562 { 00:15:38.562 "params": { 00:15:38.562 "name": "Nvme$subsystem", 00:15:38.562 "trtype": "$TEST_TRANSPORT", 00:15:38.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:38.562 "adrfam": "ipv4", 00:15:38.562 "trsvcid": "$NVMF_PORT", 00:15:38.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:38.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:38.562 "hdgst": ${hdgst:-false}, 00:15:38.562 "ddgst": ${ddgst:-false} 00:15:38.562 }, 00:15:38.562 "method": "bdev_nvme_attach_controller" 00:15:38.562 } 00:15:38.562 EOF 00:15:38.562 )") 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2997390 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:38.562 { 00:15:38.562 "params": { 00:15:38.562 "name": "Nvme$subsystem", 00:15:38.562 "trtype": "$TEST_TRANSPORT", 00:15:38.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:38.562 "adrfam": "ipv4", 00:15:38.562 "trsvcid": "$NVMF_PORT", 00:15:38.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:38.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:38.562 "hdgst": ${hdgst:-false}, 00:15:38.562 "ddgst": ${ddgst:-false} 00:15:38.562 }, 00:15:38.562 "method": "bdev_nvme_attach_controller" 00:15:38.562 } 00:15:38.562 EOF 00:15:38.562 )") 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:38.562 00:41:56 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:38.562 { 00:15:38.562 "params": { 00:15:38.562 "name": "Nvme$subsystem", 00:15:38.562 "trtype": "$TEST_TRANSPORT", 00:15:38.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:38.562 "adrfam": "ipv4", 00:15:38.562 "trsvcid": "$NVMF_PORT", 00:15:38.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:38.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:38.562 "hdgst": ${hdgst:-false}, 00:15:38.562 "ddgst": ${ddgst:-false} 00:15:38.562 }, 00:15:38.562 "method": "bdev_nvme_attach_controller" 00:15:38.562 } 00:15:38.562 EOF 00:15:38.562 )") 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2997383 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:38.562 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:38.563 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:38.563 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:38.563 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:38.563 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:38.563 "params": { 00:15:38.563 "name": "Nvme1", 00:15:38.563 "trtype": "tcp", 00:15:38.563 "traddr": "10.0.0.2", 00:15:38.563 "adrfam": "ipv4", 00:15:38.563 "trsvcid": "4420", 00:15:38.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:38.563 "hdgst": false, 00:15:38.563 "ddgst": false 00:15:38.563 }, 00:15:38.563 "method": "bdev_nvme_attach_controller" 00:15:38.563 }' 00:15:38.563 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
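Four bdevperf instances are launched in parallel against the same subsystem, one per workload (write, read, flush, unmap), each pinned to its own core and fed a generated JSON config on /dev/fd/63 that carries the bdev_nvme_attach_controller parameters printed in the trace. A hedged sketch of starting one such instance by hand, with a regular file standing in for the pipe (CONFIG.json is a hypothetical name; its content would be the attach-controller call from the generated config):

./build/examples/bdevperf --json CONFIG.json -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256   # writer on core mask 0x10
# The read, flush and unmap instances differ only in workload, core mask and shm id:
#   -m 0x20 -i 2 -w read      -m 0x40 -i 3 -w flush      -m 0x80 -i 4 -w unmap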
00:15:38.563 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:38.563 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:38.563 "params": { 00:15:38.563 "name": "Nvme1", 00:15:38.563 "trtype": "tcp", 00:15:38.563 "traddr": "10.0.0.2", 00:15:38.563 "adrfam": "ipv4", 00:15:38.563 "trsvcid": "4420", 00:15:38.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:38.563 "hdgst": false, 00:15:38.563 "ddgst": false 00:15:38.563 }, 00:15:38.563 "method": "bdev_nvme_attach_controller" 00:15:38.563 }' 00:15:38.563 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:38.563 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:38.563 "params": { 00:15:38.563 "name": "Nvme1", 00:15:38.563 "trtype": "tcp", 00:15:38.563 "traddr": "10.0.0.2", 00:15:38.563 "adrfam": "ipv4", 00:15:38.563 "trsvcid": "4420", 00:15:38.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:38.563 "hdgst": false, 00:15:38.563 "ddgst": false 00:15:38.563 }, 00:15:38.563 "method": "bdev_nvme_attach_controller" 00:15:38.563 }' 00:15:38.563 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:38.563 00:41:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:38.563 "params": { 00:15:38.563 "name": "Nvme1", 00:15:38.563 "trtype": "tcp", 00:15:38.563 "traddr": "10.0.0.2", 00:15:38.563 "adrfam": "ipv4", 00:15:38.563 "trsvcid": "4420", 00:15:38.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:38.563 "hdgst": false, 00:15:38.563 "ddgst": false 00:15:38.563 }, 00:15:38.563 "method": "bdev_nvme_attach_controller" 00:15:38.563 }' 00:15:38.821 [2024-07-16 00:41:56.419385] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:15:38.821 [2024-07-16 00:41:56.419448] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:38.821 [2024-07-16 00:41:56.419594] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:15:38.821 [2024-07-16 00:41:56.419648] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:38.821 [2024-07-16 00:41:56.420840] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:15:38.821 [2024-07-16 00:41:56.420891] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:38.821 [2024-07-16 00:41:56.422108] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:15:38.821 [2024-07-16 00:41:56.422167] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:38.821 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.821 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.821 [2024-07-16 00:41:56.649137] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.079 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.079 [2024-07-16 00:41:56.709597] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.079 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.079 [2024-07-16 00:41:56.790680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:39.079 [2024-07-16 00:41:56.799454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:39.079 [2024-07-16 00:41:56.803972] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.079 [2024-07-16 00:41:56.865651] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.079 [2024-07-16 00:41:56.906327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:39.337 [2024-07-16 00:41:56.955137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:39.337 Running I/O for 1 seconds... 00:15:39.337 Running I/O for 1 seconds... 00:15:39.337 Running I/O for 1 seconds... 00:15:39.596 Running I/O for 1 seconds... 00:15:40.163 00:15:40.163 Latency(us) 00:15:40.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.163 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:40.163 Nvme1n1 : 1.01 7092.10 27.70 0.00 0.00 17939.59 7745.16 23473.80 00:15:40.163 =================================================================================================================== 00:15:40.163 Total : 7092.10 27.70 0.00 0.00 17939.59 7745.16 23473.80 00:15:40.422 00:15:40.422 Latency(us) 00:15:40.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.422 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:40.422 Nvme1n1 : 1.01 7117.31 27.80 0.00 0.00 17882.63 10604.92 27525.12 00:15:40.422 =================================================================================================================== 00:15:40.422 Total : 7117.31 27.80 0.00 0.00 17882.63 10604.92 27525.12 00:15:40.422 00:15:40.422 Latency(us) 00:15:40.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.422 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:40.422 Nvme1n1 : 1.01 4589.67 17.93 0.00 0.00 27719.12 5600.35 42419.67 00:15:40.422 =================================================================================================================== 00:15:40.422 Total : 4589.67 17.93 0.00 0.00 27719.12 5600.35 42419.67 00:15:40.422 00:15:40.422 Latency(us) 00:15:40.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.422 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:40.422 Nvme1n1 : 1.00 163655.68 639.28 0.00 0.00 778.89 310.92 908.57 00:15:40.422 =================================================================================================================== 00:15:40.422 Total : 163655.68 639.28 0.00 0.00 778.89 310.92 908.57 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 2997385 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2997387 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2997390 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:40.681 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:40.681 rmmod nvme_tcp 00:15:40.681 rmmod nvme_fabrics 00:15:40.681 rmmod nvme_keyring 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2997106 ']' 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2997106 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2997106 ']' 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2997106 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2997106 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2997106' 00:15:40.940 killing process with pid 2997106 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2997106 00:15:40.940 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2997106 00:15:41.199 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.199 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.199 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.199 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.199 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.199 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.199 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.199 00:41:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.104 00:42:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:43.104 00:15:43.104 real 0m11.844s 00:15:43.104 user 0m20.662s 00:15:43.104 sys 0m6.399s 00:15:43.104 00:42:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:43.104 00:42:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:43.104 ************************************ 00:15:43.104 END TEST nvmf_bdev_io_wait 00:15:43.104 ************************************ 00:15:43.104 00:42:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:43.104 00:42:00 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:43.104 00:42:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:43.104 00:42:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:43.104 00:42:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:43.104 ************************************ 00:15:43.104 START TEST nvmf_queue_depth 00:15:43.104 ************************************ 00:15:43.104 00:42:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:43.363 * Looking for test storage... 
00:15:43.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:43.363 00:42:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.926 
00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:49.926 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:49.926 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:49.926 Found net devices under 0000:af:00.0: cvl_0_0 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.926 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:49.927 Found net devices under 0000:af:00.1: cvl_0_1 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:49.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:15:49.927 00:15:49.927 --- 10.0.0.2 ping statistics --- 00:15:49.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.927 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:15:49.927 00:15:49.927 --- 10.0.0.1 ping statistics --- 00:15:49.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.927 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3001522 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3001522 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3001522 ']' 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.927 00:42:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.927 [2024-07-16 00:42:06.943299] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:15:49.927 [2024-07-16 00:42:06.943361] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.927 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.927 [2024-07-16 00:42:07.051994] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.927 [2024-07-16 00:42:07.198761] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.927 [2024-07-16 00:42:07.198822] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.927 [2024-07-16 00:42:07.198845] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.927 [2024-07-16 00:42:07.198864] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.927 [2024-07-16 00:42:07.198881] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.927 [2024-07-16 00:42:07.198922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.927 [2024-07-16 00:42:07.371164] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.927 Malloc0 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.927 
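The queue-depth target is provisioned the same way as in the previous test: a 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from queue_depth.sh) is exposed through subsystem nqn.2016-06.io.spdk:cnode1, and a TCP listener on 10.0.0.2:4420 is added in the trace just below. A rough equivalent with scripts/rpc.py (the RPC socket is a unix socket, so it is reachable without entering the namespace):

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB backing bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420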
00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.927 [2024-07-16 00:42:07.433045] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3001555 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:49.927 00:42:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3001555 /var/tmp/bdevperf.sock 00:15:49.928 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3001555 ']' 00:15:49.928 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:49.928 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:49.928 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:49.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:49.928 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:49.928 00:42:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:49.928 [2024-07-16 00:42:07.496207] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:15:49.928 [2024-07-16 00:42:07.496267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3001555 ] 00:15:49.928 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.928 [2024-07-16 00:42:07.579746] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.928 [2024-07-16 00:42:07.669731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.866 00:42:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.866 00:42:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:50.866 00:42:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:50.866 00:42:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.866 00:42:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.866 NVMe0n1 00:15:50.866 00:42:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.867 00:42:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.126 Running I/O for 10 seconds... 00:16:01.184 00:16:01.184 Latency(us) 00:16:01.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.184 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:01.184 Verification LBA range: start 0x0 length 0x4000 00:16:01.184 NVMe0n1 : 10.11 6532.61 25.52 0.00 0.00 155711.81 29669.93 94371.84 00:16:01.184 =================================================================================================================== 00:16:01.184 Total : 6532.61 25.52 0.00 0.00 155711.81 29669.93 94371.84 00:16:01.184 0 00:16:01.184 00:42:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3001555 00:16:01.184 00:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3001555 ']' 00:16:01.184 00:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3001555 00:16:01.184 00:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:01.184 00:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.184 00:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3001555 00:16:01.184 00:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:01.184 00:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:01.184 00:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3001555' 00:16:01.184 killing process with pid 3001555 00:16:01.184 00:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3001555 00:16:01.184 Received shutdown signal, test time was about 10.000000 seconds 00:16:01.184 00:16:01.184 Latency(us) 00:16:01.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.184 
=================================================================================================================== 00:16:01.184 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.184 00:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3001555 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:01.444 rmmod nvme_tcp 00:16:01.444 rmmod nvme_fabrics 00:16:01.444 rmmod nvme_keyring 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3001522 ']' 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3001522 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3001522 ']' 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3001522 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3001522 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3001522' 00:16:01.444 killing process with pid 3001522 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3001522 00:16:01.444 00:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3001522 00:16:01.703 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:01.703 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:01.703 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:01.703 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.703 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:01.703 00:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.703 00:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.703 00:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.242 00:42:21 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:04.242 00:16:04.242 real 0m20.614s 00:16:04.242 user 0m25.607s 00:16:04.242 sys 0m5.757s 00:16:04.242 00:42:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.242 00:42:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:04.242 ************************************ 00:16:04.242 END TEST nvmf_queue_depth 00:16:04.242 ************************************ 00:16:04.242 00:42:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:04.242 00:42:21 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:04.242 00:42:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:04.242 00:42:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:04.242 00:42:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:04.242 ************************************ 00:16:04.242 START TEST nvmf_target_multipath 00:16:04.242 ************************************ 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:04.242 * Looking for test storage... 00:16:04.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.242 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.243 00:42:21 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.513 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:09.513 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:09.514 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:09.514 Found net devices under 0000:af:00.0: cvl_0_0 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:09.514 Found net devices under 0000:af:00.1: cvl_0_1 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.514 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:09.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:16:09.772 00:16:09.772 --- 10.0.0.2 ping statistics --- 00:16:09.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.772 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:09.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:16:09.772 00:16:09.772 --- 10.0.0.1 ping statistics --- 00:16:09.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.772 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:09.772 only one NIC for nvmf test 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:09.772 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:09.772 rmmod nvme_tcp 00:16:10.029 rmmod nvme_fabrics 00:16:10.029 rmmod nvme_keyring 00:16:10.029 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:10.029 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:10.030 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:10.030 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:10.030 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:10.030 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:10.030 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:10.030 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.030 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:10.030 00:42:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.030 00:42:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.030 00:42:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:11.934 00:16:11.934 real 0m8.130s 00:16:11.934 user 0m1.693s 00:16:11.934 sys 0m4.403s 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:11.934 00:42:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:11.934 ************************************ 00:16:11.934 END TEST nvmf_target_multipath 00:16:11.934 ************************************ 00:16:12.193 00:42:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:12.193 00:42:29 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:12.193 00:42:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:12.193 00:42:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.193 00:42:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:12.193 ************************************ 00:16:12.193 START TEST nvmf_zcopy 00:16:12.193 ************************************ 00:16:12.193 00:42:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:12.193 * Looking for test storage... 
00:16:12.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:12.193 00:42:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.193 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:12.193 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.193 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.193 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.193 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:12.194 00:42:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:18.763 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.763 
00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:18.763 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:18.763 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:18.764 Found net devices under 0000:af:00.0: cvl_0_0 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:18.764 Found net devices under 0000:af:00.1: cvl_0_1 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:18.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:16:18.764 00:16:18.764 --- 10.0.0.2 ping statistics --- 00:16:18.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.764 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:18.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:16:18.764 00:16:18.764 --- 10.0.0.1 ping statistics --- 00:16:18.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.764 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3011217 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3011217 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3011217 ']' 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.764 00:42:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:18.764 [2024-07-16 00:42:35.888538] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:16:18.764 [2024-07-16 00:42:35.888592] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.764 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.764 [2024-07-16 00:42:35.975507] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.764 [2024-07-16 00:42:36.077901] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.764 [2024-07-16 00:42:36.077954] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:18.764 [2024-07-16 00:42:36.077968] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.764 [2024-07-16 00:42:36.077980] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.764 [2024-07-16 00:42:36.077990] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.764 [2024-07-16 00:42:36.078018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.023 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:19.023 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:19.023 00:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:19.023 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:19.023 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.282 [2024-07-16 00:42:36.869598] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.282 [2024-07-16 00:42:36.889739] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.282 malloc0 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.282 
00:42:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:19.282 { 00:16:19.282 "params": { 00:16:19.282 "name": "Nvme$subsystem", 00:16:19.282 "trtype": "$TEST_TRANSPORT", 00:16:19.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:19.282 "adrfam": "ipv4", 00:16:19.282 "trsvcid": "$NVMF_PORT", 00:16:19.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:19.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:19.282 "hdgst": ${hdgst:-false}, 00:16:19.282 "ddgst": ${ddgst:-false} 00:16:19.282 }, 00:16:19.282 "method": "bdev_nvme_attach_controller" 00:16:19.282 } 00:16:19.282 EOF 00:16:19.282 )") 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:19.282 00:42:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:19.282 "params": { 00:16:19.282 "name": "Nvme1", 00:16:19.282 "trtype": "tcp", 00:16:19.282 "traddr": "10.0.0.2", 00:16:19.282 "adrfam": "ipv4", 00:16:19.282 "trsvcid": "4420", 00:16:19.282 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:19.282 "hdgst": false, 00:16:19.282 "ddgst": false 00:16:19.282 }, 00:16:19.282 "method": "bdev_nvme_attach_controller" 00:16:19.282 }' 00:16:19.282 [2024-07-16 00:42:36.982167] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:16:19.282 [2024-07-16 00:42:36.982233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3011494 ] 00:16:19.282 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.282 [2024-07-16 00:42:37.066576] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.541 [2024-07-16 00:42:37.158888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.541 Running I/O for 10 seconds... 
00:16:31.749 00:16:31.749 Latency(us) 00:16:31.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.749 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:31.749 Verification LBA range: start 0x0 length 0x1000 00:16:31.749 Nvme1n1 : 10.02 4470.31 34.92 0.00 0.00 28550.32 4021.53 37653.41 00:16:31.749 =================================================================================================================== 00:16:31.749 Total : 4470.31 34.92 0.00 0.00 28550.32 4021.53 37653.41 00:16:31.749 00:42:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3013326 00:16:31.749 00:42:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:31.749 00:42:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:31.749 00:42:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:31.749 00:42:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:31.749 00:42:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:31.749 00:42:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:31.749 00:42:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:31.749 00:42:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:31.749 { 00:16:31.749 "params": { 00:16:31.749 "name": "Nvme$subsystem", 00:16:31.749 "trtype": "$TEST_TRANSPORT", 00:16:31.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.749 "adrfam": "ipv4", 00:16:31.749 "trsvcid": "$NVMF_PORT", 00:16:31.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.749 "hdgst": ${hdgst:-false}, 00:16:31.749 "ddgst": ${ddgst:-false} 00:16:31.749 }, 00:16:31.749 "method": "bdev_nvme_attach_controller" 00:16:31.749 } 00:16:31.749 EOF 00:16:31.749 )") 00:16:31.749 00:42:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:31.749 [2024-07-16 00:42:47.579747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.579790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 00:42:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:16:31.749 00:42:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:31.749 00:42:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:31.749 "params": { 00:16:31.749 "name": "Nvme1", 00:16:31.749 "trtype": "tcp", 00:16:31.749 "traddr": "10.0.0.2", 00:16:31.749 "adrfam": "ipv4", 00:16:31.749 "trsvcid": "4420", 00:16:31.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:31.749 "hdgst": false, 00:16:31.749 "ddgst": false 00:16:31.749 }, 00:16:31.749 "method": "bdev_nvme_attach_controller" 00:16:31.749 }' 00:16:31.749 [2024-07-16 00:42:47.591753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.591775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.603792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.603810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.615825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.615842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.621176] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:16:31.749 [2024-07-16 00:42:47.621231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3013326 ] 00:16:31.749 [2024-07-16 00:42:47.627860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.627878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.639893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.639910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.651926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.651946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.749 [2024-07-16 00:42:47.663963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.663981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.676000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.676017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.688037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.688054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.700073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.700091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.702338] app.c: 
914:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.749 [2024-07-16 00:42:47.712105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.712124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.724140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.724157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.736173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.736190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.748217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.748244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.760249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.760277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.772284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.772302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.784323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.784340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.790522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.749 [2024-07-16 00:42:47.796359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.796377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.808402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.808428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.820431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.820453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.832460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.832479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.844497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.844514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.856537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.856556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.749 [2024-07-16 00:42:47.868565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.749 [2024-07-16 00:42:47.868583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:16:31.749 [2024-07-16 00:42:47.880601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:47.880618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:47.892647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:47.892675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:47.904681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:47.904705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:47.916717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:47.916742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:47.928758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:47.928785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:47.940789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:47.940812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:47.952824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:47.952851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 Running I/O for 5 seconds... 00:16:31.750 [2024-07-16 00:42:47.973752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:47.973784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:47.991238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:47.991278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.009651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.009681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.027632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.027662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.046771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.046800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.065877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.065906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.084356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.084385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.102455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.102484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.121501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.121530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.140566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.140596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.159882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.159912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.179269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.179299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.197240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.197278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.215654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.215684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.233924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.233953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.251889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.251919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.270999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.271030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.284694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.284722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.298574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.298603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.316105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.316134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.334616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.334647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.353608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.353638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:31.750 [2024-07-16 00:42:48.372794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.372823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.391033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.391062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.409953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.409982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.428971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.428999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.446179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.446208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.465621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.465649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.482320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.482348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.500304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.500332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.519074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.519103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.538425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.538454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.555434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.555463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.567705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.567734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.581663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.581692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.595928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.595957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.613198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 
[2024-07-16 00:42:48.613227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.629910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.629938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.647797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.647826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.665128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.665156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.682905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.682934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.701630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.701658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.718542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.718571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.737263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.737292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.754338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.754367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.773055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.773084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.750 [2024-07-16 00:42:48.790978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.750 [2024-07-16 00:42:48.791006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:48.810226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:48.810262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:48.829468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:48.829497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:48.847617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:48.847645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:48.865334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:48.865362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:48.883413] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:48.883442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:48.901666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:48.901697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:48.919819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:48.919848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:48.937387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:48.937415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:48.955559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:48.955588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:48.974676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:48.974705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:48.993989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:48.994019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.012300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.012330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.031578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.031607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.049788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.049816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.068601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.068629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.085536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.085565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.097928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.097956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.112781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.112808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.130296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.130324] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.149425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.149454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.168778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.168807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.186840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.186869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.204596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.204625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.222619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.222647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.241587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.241615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.259661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.259689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.277680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.277709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.296241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.296277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.314248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.314285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.326647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.326675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.341137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.341171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.358823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.358851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.375550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.375580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.394823] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.394854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.411754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.411784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.424620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.424648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.440057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.440086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.459446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.459477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.478027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.478058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.497368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.497397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.516608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.516636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.533564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.533593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.545642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.545671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.559911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.559939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.751 [2024-07-16 00:42:49.577403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.751 [2024-07-16 00:42:49.577432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.594129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.594159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.612532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.612562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.631672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.631701] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.651060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.651089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.667592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.667627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.686627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.686657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.705972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.706003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.725246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.725286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.743567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.743598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.762558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.762589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.780572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.780600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.799749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.799779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.818330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.818360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.010 [2024-07-16 00:42:49.837211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.010 [2024-07-16 00:42:49.837240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.268 [2024-07-16 00:42:49.855434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.268 [2024-07-16 00:42:49.855463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.268 [2024-07-16 00:42:49.874694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.268 [2024-07-16 00:42:49.874722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.268 [2024-07-16 00:42:49.892751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.268 [2024-07-16 00:42:49.892780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.268 [2024-07-16 00:42:49.910748] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.268 [2024-07-16 00:42:49.910777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.268 [2024-07-16 00:42:49.929717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.268 [2024-07-16 00:42:49.929747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.269 [2024-07-16 00:42:49.947955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.269 [2024-07-16 00:42:49.947985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.269 [2024-07-16 00:42:49.965679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.269 [2024-07-16 00:42:49.965708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.269 [2024-07-16 00:42:49.984673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.269 [2024-07-16 00:42:49.984702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.269 [2024-07-16 00:42:50.001425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.269 [2024-07-16 00:42:50.001456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.269 [2024-07-16 00:42:50.020328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.269 [2024-07-16 00:42:50.020362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.269 [2024-07-16 00:42:50.039159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.269 [2024-07-16 00:42:50.039191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.269 [2024-07-16 00:42:50.057870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.269 [2024-07-16 00:42:50.057900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.269 [2024-07-16 00:42:50.075972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.269 [2024-07-16 00:42:50.076001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.269 [2024-07-16 00:42:50.094689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.269 [2024-07-16 00:42:50.094718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.527 [2024-07-16 00:42:50.112752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.527 [2024-07-16 00:42:50.112780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.527 [2024-07-16 00:42:50.132195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.527 [2024-07-16 00:42:50.132224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.527 [2024-07-16 00:42:50.149124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.527 [2024-07-16 00:42:50.149154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.527 [2024-07-16 00:42:50.167155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.527 [2024-07-16 00:42:50.167183] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.527 [2024-07-16 00:42:50.185155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.527 [2024-07-16 00:42:50.185184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.527 [2024-07-16 00:42:50.204193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.527 [2024-07-16 00:42:50.204223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.527 [2024-07-16 00:42:50.221074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.527 [2024-07-16 00:42:50.221103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.528 [2024-07-16 00:42:50.239182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.528 [2024-07-16 00:42:50.239211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.528 [2024-07-16 00:42:50.256985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.528 [2024-07-16 00:42:50.257014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.528 [2024-07-16 00:42:50.275876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.528 [2024-07-16 00:42:50.275905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.528 [2024-07-16 00:42:50.293144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.528 [2024-07-16 00:42:50.293172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.528 [2024-07-16 00:42:50.312467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.528 [2024-07-16 00:42:50.312496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.528 [2024-07-16 00:42:50.330652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.528 [2024-07-16 00:42:50.330681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.528 [2024-07-16 00:42:50.349486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.528 [2024-07-16 00:42:50.349514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.367447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.367482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.385203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.385232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.404303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.404332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.422476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.422505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.441581] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.441610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.459494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.459523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.477896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.477925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.497171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.497200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.514148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.514176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.526629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.526658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.540233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.540269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.554592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.554621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.572395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.572424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.590808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.590838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.786 [2024-07-16 00:42:50.609139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.786 [2024-07-16 00:42:50.609168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.627978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.628006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.645886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.645914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.664805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.664834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.683527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.683556] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.701927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.701958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.720917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.720948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.739230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.739266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.757426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.757455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.776409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.776447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.794406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.794434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.813061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.813090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.831522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.831551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.850950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.850980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.045 [2024-07-16 00:42:50.869899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.045 [2024-07-16 00:42:50.869929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:50.889281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:50.889312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:50.907511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:50.907541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:50.926347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:50.926377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:50.943194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:50.943223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:50.962514] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:50.962543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:50.979405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:50.979435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:50.997603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:50.997633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:51.015586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:51.015615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:51.033707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:51.033736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:51.053239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:51.053276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:51.071333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:51.071362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:51.090377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:51.090407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:51.108507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:51.108538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.304 [2024-07-16 00:42:51.127965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.304 [2024-07-16 00:42:51.127994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.146292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.146321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.164263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.164292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.182225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.182262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.201441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.201471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.219084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.219113] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.237555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.237584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.257038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.257068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.276529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.276560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.295309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.295339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.313728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.313757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.333148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.333177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.350343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.350371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.369360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.369389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.564 [2024-07-16 00:42:51.387492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.564 [2024-07-16 00:42:51.387521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.406357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.406387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.424626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.424655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.443028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.443057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.461057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.461085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.480438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.480467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.498494] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.498524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.518052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.518082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.536326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.536355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.555577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.555606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.573671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.573700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.591990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.592019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.610080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.610109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.629272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.629300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.823 [2024-07-16 00:42:51.647227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.823 [2024-07-16 00:42:51.647263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.665341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.665370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.684057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.684085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.702899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.702928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.721350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.721382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.740321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.740351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.759742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.759771] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.777740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.777769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.796655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.796685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.814419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.814448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.832959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.832988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.849851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.849880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.867591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.867619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.886657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.886686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.083 [2024-07-16 00:42:51.904953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.083 [2024-07-16 00:42:51.904984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:51.922676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:51.922705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:51.941850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:51.941880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:51.959893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:51.959922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:51.977776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:51.977805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:51.996827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:51.996857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:52.014666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:52.014695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:52.032843] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:52.032872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:52.049709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:52.049738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:52.067568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:52.067597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:52.085184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:52.085218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:52.103380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:52.103409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:52.121593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:52.121621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:52.139897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:52.139926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:52.159230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:52.159266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.342 [2024-07-16 00:42:52.177379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.342 [2024-07-16 00:42:52.177407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.195057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.195086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.214205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.214234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.233331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.233361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.251693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.251722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.270693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.270721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.287621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.287650] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.300263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.300291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.315230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.315268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.332189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.332218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.350336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.350366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.368290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.368320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.386809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.386839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.404848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.404877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.602 [2024-07-16 00:42:52.424029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.602 [2024-07-16 00:42:52.424064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.442228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.442269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.460037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.460067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.476783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.476812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.496021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.496051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.513020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.513048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.525895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.525924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.541285] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.541314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.560228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.560266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.578724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.578754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.597831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.597862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.614583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.614611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.633933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.633962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.654090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.654119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.672681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.672710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.861 [2024-07-16 00:42:52.689529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.861 [2024-07-16 00:42:52.689560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.708562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.708592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.725352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.725380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.742035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.742067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.760402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.760437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.780078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.780108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.797911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.797940] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.817067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.817095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.834971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.834999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.852978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.853006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.872323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.872351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.890427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.890455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.909546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.909574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.927403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.927432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.121 [2024-07-16 00:42:52.946728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.121 [2024-07-16 00:42:52.946755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.380 [2024-07-16 00:42:52.964910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.380 [2024-07-16 00:42:52.964941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.380 00:16:35.380 Latency(us) 00:16:35.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.380 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:35.380 Nvme1n1 : 5.01 8773.18 68.54 0.00 0.00 14571.92 6166.34 28359.21 00:16:35.380 =================================================================================================================== 00:16:35.380 Total : 8773.18 68.54 0.00 0.00 14571.92 6166.34 28359.21 00:16:35.380 [2024-07-16 00:42:52.980627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.380 [2024-07-16 00:42:52.980655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.380 [2024-07-16 00:42:52.989766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.380 [2024-07-16 00:42:52.989792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.380 [2024-07-16 00:42:53.001795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.380 [2024-07-16 00:42:53.001814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.380 [2024-07-16 00:42:53.013837] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.380 [2024-07-16 00:42:53.013861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.380 [2024-07-16 00:42:53.025859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.380 [2024-07-16 00:42:53.025885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.380 [2024-07-16 00:42:53.037896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.380 [2024-07-16 00:42:53.037917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.380 [2024-07-16 00:42:53.049931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.380 [2024-07-16 00:42:53.049951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.380 [2024-07-16 00:42:53.061971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.380 [2024-07-16 00:42:53.061994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.380 [2024-07-16 00:42:53.073999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.380 [2024-07-16 00:42:53.074019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.381 [2024-07-16 00:42:53.086039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.381 [2024-07-16 00:42:53.086059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.381 [2024-07-16 00:42:53.098072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.381 [2024-07-16 00:42:53.098089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.381 [2024-07-16 00:42:53.110104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.381 [2024-07-16 00:42:53.110121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.381 [2024-07-16 00:42:53.122145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.381 [2024-07-16 00:42:53.122166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.381 [2024-07-16 00:42:53.134178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.381 [2024-07-16 00:42:53.134195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.381 [2024-07-16 00:42:53.146214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.381 [2024-07-16 00:42:53.146233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.381 [2024-07-16 00:42:53.158250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.381 [2024-07-16 00:42:53.158276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.381 [2024-07-16 00:42:53.170282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.381 [2024-07-16 00:42:53.170299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.381 [2024-07-16 00:42:53.182323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.381 [2024-07-16 00:42:53.182340] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3013326) - No such process 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3013326 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:35.381 delay0 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.381 00:42:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:35.643 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.643 [2024-07-16 00:42:53.370446] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:42.210 Initializing NVMe Controllers 00:16:42.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:42.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:42.210 Initialization complete. Launching workers. 
00:16:42.210 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 127 00:16:42.210 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 403, failed to submit 44 00:16:42.210 success 234, unsuccess 169, failed 0 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.210 rmmod nvme_tcp 00:16:42.210 rmmod nvme_fabrics 00:16:42.210 rmmod nvme_keyring 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3011217 ']' 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3011217 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3011217 ']' 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3011217 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3011217 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3011217' 00:16:42.210 killing process with pid 3011217 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3011217 00:16:42.210 00:42:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3011217 00:16:42.211 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:42.211 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:42.211 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:42.211 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.211 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:42.211 00:42:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.211 00:42:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.211 00:42:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.138 00:43:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:44.398 00:16:44.398 real 0m32.157s 00:16:44.398 user 0m43.808s 00:16:44.398 sys 0m10.033s 00:16:44.398 00:43:01 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:16:44.398 00:43:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:44.398 ************************************ 00:16:44.398 END TEST nvmf_zcopy 00:16:44.398 ************************************ 00:16:44.398 00:43:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:44.398 00:43:02 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:44.398 00:43:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:44.398 00:43:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.398 00:43:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:44.398 ************************************ 00:16:44.398 START TEST nvmf_nmic 00:16:44.398 ************************************ 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:44.398 * Looking for test storage... 00:16:44.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.398 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:44.399 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:44.399 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:44.399 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.399 00:43:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.399 00:43:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.399 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:44.399 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:44.399 00:43:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:44.399 00:43:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:50.967 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:50.967 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:50.967 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:50.968 Found net devices under 0000:af:00.0: cvl_0_0 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:50.968 Found net devices under 0000:af:00.1: cvl_0_1 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:50.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:16:50.968 00:16:50.968 --- 10.0.0.2 ping statistics --- 00:16:50.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.968 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:16:50.968 00:43:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:50.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:50.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:16:50.968 00:16:50.968 --- 10.0.0.1 ping statistics --- 00:16:50.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.968 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3019137 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3019137 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3019137 ']' 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.968 00:43:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:50.968 [2024-07-16 00:43:08.103695] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:16:50.968 [2024-07-16 00:43:08.103756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.968 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.968 [2024-07-16 00:43:08.191266] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:50.968 [2024-07-16 00:43:08.284337] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.968 [2024-07-16 00:43:08.284378] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:50.968 [2024-07-16 00:43:08.284389] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.968 [2024-07-16 00:43:08.284398] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.968 [2024-07-16 00:43:08.284406] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.968 [2024-07-16 00:43:08.284463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.968 [2024-07-16 00:43:08.284574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.968 [2024-07-16 00:43:08.284706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.968 [2024-07-16 00:43:08.284706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.227 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.227 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:51.227 00:43:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.227 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.227 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.487 [2024-07-16 00:43:09.098059] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.487 Malloc0 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.487 [2024-07-16 00:43:09.157877] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:51.487 test case1: single bdev can't be used in multiple subsystems 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.487 [2024-07-16 00:43:09.181823] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:51.487 [2024-07-16 00:43:09.181848] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:51.487 [2024-07-16 00:43:09.181858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:51.487 request: 00:16:51.487 { 00:16:51.487 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:51.487 "namespace": { 00:16:51.487 "bdev_name": "Malloc0", 00:16:51.487 "no_auto_visible": false 00:16:51.487 }, 00:16:51.487 "method": "nvmf_subsystem_add_ns", 00:16:51.487 "req_id": 1 00:16:51.487 } 00:16:51.487 Got JSON-RPC error response 00:16:51.487 response: 00:16:51.487 { 00:16:51.487 "code": -32602, 00:16:51.487 "message": "Invalid parameters" 00:16:51.487 } 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:51.487 Adding namespace failed - expected result. 
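Test case1 above exercises the expected failure path: Malloc0 is already claimed (type exclusive_write) by cnode1, so adding it as a namespace of cnode2 fails in spdk_nvmf_subsystem_add_ns_ext and the RPC returns error -32602 (Invalid parameters). A minimal sketch of the same check run by hand against an already-running target, assuming the standard scripts/rpc.py client from the SPDK tree (the NQNs, serial numbers and malloc size match this run; the rpc.py path is an assumption):

rpc=./scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
  || echo 'Adding namespace failed - expected result.'          # bdev already claimed by cnode1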
00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:51.487 test case2: host connect to nvmf target in multiple paths 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.487 [2024-07-16 00:43:09.193991] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.487 00:43:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:52.865 00:43:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:54.242 00:43:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:54.242 00:43:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:54.242 00:43:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:54.242 00:43:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:54.242 00:43:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:56.256 00:43:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:56.256 00:43:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:56.256 00:43:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:56.256 00:43:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:56.256 00:43:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:56.256 00:43:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:56.256 00:43:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:56.256 [global] 00:16:56.256 thread=1 00:16:56.256 invalidate=1 00:16:56.256 rw=write 00:16:56.256 time_based=1 00:16:56.256 runtime=1 00:16:56.256 ioengine=libaio 00:16:56.256 direct=1 00:16:56.256 bs=4096 00:16:56.256 iodepth=1 00:16:56.256 norandommap=0 00:16:56.256 numjobs=1 00:16:56.256 00:16:56.256 verify_dump=1 00:16:56.256 verify_backlog=512 00:16:56.256 verify_state_save=0 00:16:56.256 do_verify=1 00:16:56.256 verify=crc32c-intel 00:16:56.256 [job0] 00:16:56.256 filename=/dev/nvme0n1 00:16:56.256 Could not set queue depth (nvme0n1) 00:16:56.518 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.518 fio-3.35 00:16:56.518 Starting 1 thread 00:16:57.893 00:16:57.893 job0: (groupid=0, jobs=1): err= 0: pid=3020362: Tue Jul 16 00:43:15 2024 00:16:57.893 read: IOPS=1089, BW=4360KiB/s (4464kB/s)(4364KiB/1001msec) 00:16:57.893 slat (nsec): min=6379, max=29885, avg=7221.14, stdev=1620.94 
00:16:57.893 clat (usec): min=341, max=687, avg=433.98, stdev=21.23 00:16:57.893 lat (usec): min=348, max=695, avg=441.20, stdev=21.21 00:16:57.893 clat percentiles (usec): 00:16:57.893 | 1.00th=[ 383], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 424], 00:16:57.893 | 30.00th=[ 429], 40.00th=[ 433], 50.00th=[ 433], 60.00th=[ 437], 00:16:57.893 | 70.00th=[ 441], 80.00th=[ 441], 90.00th=[ 445], 95.00th=[ 449], 00:16:57.893 | 99.00th=[ 461], 99.50th=[ 652], 99.90th=[ 676], 99.95th=[ 685], 00:16:57.893 | 99.99th=[ 685] 00:16:57.893 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:57.893 slat (nsec): min=9042, max=39310, avg=10136.18, stdev=1593.91 00:16:57.893 clat (usec): min=250, max=583, avg=324.20, stdev=12.00 00:16:57.893 lat (usec): min=259, max=622, avg=334.33, stdev=12.34 00:16:57.893 clat percentiles (usec): 00:16:57.893 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318], 00:16:57.893 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 322], 60.00th=[ 326], 00:16:57.893 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 338], 95.00th=[ 338], 00:16:57.893 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 420], 99.95th=[ 586], 00:16:57.893 | 99.99th=[ 586] 00:16:57.893 bw ( KiB/s): min= 6496, max= 6496, per=100.00%, avg=6496.00, stdev= 0.00, samples=1 00:16:57.893 iops : min= 1624, max= 1624, avg=1624.00, stdev= 0.00, samples=1 00:16:57.893 lat (usec) : 500=99.73%, 750=0.27% 00:16:57.893 cpu : usr=0.70%, sys=3.00%, ctx=2627, majf=0, minf=2 00:16:57.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.893 issued rwts: total=1091,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.893 00:16:57.893 Run status group 0 (all jobs): 00:16:57.893 READ: bw=4360KiB/s (4464kB/s), 4360KiB/s-4360KiB/s (4464kB/s-4464kB/s), io=4364KiB (4469kB), run=1001-1001msec 00:16:57.893 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:16:57.893 00:16:57.893 Disk stats (read/write): 00:16:57.893 nvme0n1: ios=1074/1322, merge=0/0, ticks=473/424, in_queue=897, util=92.79% 00:16:57.893 00:43:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:57.893 00:43:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.893 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:57.893 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:57.893 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:57.894 rmmod nvme_tcp 00:16:57.894 rmmod nvme_fabrics 00:16:57.894 rmmod nvme_keyring 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3019137 ']' 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3019137 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3019137 ']' 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3019137 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3019137 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3019137' 00:16:57.894 killing process with pid 3019137 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3019137 00:16:57.894 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3019137 00:16:58.153 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.153 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.153 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.153 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.153 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.153 00:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.153 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.153 00:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.692 00:43:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:00.692 00:17:00.692 real 0m15.914s 00:17:00.692 user 0m43.898s 00:17:00.692 sys 0m5.404s 00:17:00.692 00:43:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.692 00:43:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:00.692 ************************************ 00:17:00.692 END TEST nvmf_nmic 00:17:00.692 ************************************ 00:17:00.692 00:43:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:00.692 00:43:18 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:00.692 00:43:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:00.692 
00:43:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.692 00:43:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:00.692 ************************************ 00:17:00.692 START TEST nvmf_fio_target 00:17:00.692 ************************************ 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:00.692 * Looking for test storage... 00:17:00.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:00.692 00:43:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.965 00:43:23 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:05.965 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:05.965 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.965 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.966 00:43:23 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:05.966 Found net devices under 0000:af:00.0: cvl_0_0 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:05.966 Found net devices under 0000:af:00.1: cvl_0_1 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.966 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:06.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:17:06.225 00:17:06.225 --- 10.0.0.2 ping statistics --- 00:17:06.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.225 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:17:06.225 00:17:06.225 --- 10.0.0.1 ping statistics --- 00:17:06.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.225 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:06.225 00:43:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:06.225 00:43:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:06.225 00:43:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:06.225 00:43:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:06.225 00:43:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.225 00:43:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3024194 00:17:06.225 00:43:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3024194 00:17:06.225 00:43:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:06.225 00:43:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3024194 ']' 00:17:06.225 00:43:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.225 00:43:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:06.225 00:43:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
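(Sketch added for readability; not part of the job output.) The network plumbing traced above can be reproduced by hand: one E810 port stays in the default namespace for the initiator, the other is moved into a namespace for the target, both get addresses on a shared /24, TCP port 4420 is opened, reachability is checked with ping in both directions, and nvme-tcp is loaded on the host side. A minimal sketch follows, with nic0 and nic1 as placeholder names standing in for the cvl_0_1 and cvl_0_0 devices the job detected; the job itself performs the same steps through nvmf_tcp_init in nvmf/common.sh.

# Sketch only; interface names are placeholders for the detected ports.
ip netns add nvmf_tgt_ns                                      # namespace that will host the SPDK target
ip link set nic1 netns nvmf_tgt_ns                            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev nic0                              # initiator address in the default namespace
ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev nic1    # target address inside the namespace
ip link set nic0 up
ip netns exec nvmf_tgt_ns ip link set nic1 up
ip netns exec nvmf_tgt_ns ip link set lo up
iptables -I INPUT 1 -i nic0 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic through
ping -c 1 10.0.0.2                                            # initiator -> target reachability check
ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1                  # target -> initiator
modprobe nvme-tcp                                             # host-side NVMe/TCP driver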
00:17:06.225 00:43:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:06.225 00:43:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.483 [2024-07-16 00:43:24.073157] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:17:06.483 [2024-07-16 00:43:24.073217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.483 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.483 [2024-07-16 00:43:24.162090] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.483 [2024-07-16 00:43:24.250509] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.483 [2024-07-16 00:43:24.250556] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.483 [2024-07-16 00:43:24.250566] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.483 [2024-07-16 00:43:24.250575] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.483 [2024-07-16 00:43:24.250582] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.483 [2024-07-16 00:43:24.250639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.483 [2024-07-16 00:43:24.250751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.483 [2024-07-16 00:43:24.250840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.483 [2024-07-16 00:43:24.250840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.418 00:43:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.418 00:43:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:07.418 00:43:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:07.418 00:43:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:07.418 00:43:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.418 00:43:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.418 00:43:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:07.418 [2024-07-16 00:43:25.203636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.418 00:43:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.985 00:43:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:07.985 00:43:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.985 00:43:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:07.985 00:43:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:08.243 00:43:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
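(Sketch added for readability; not part of the job output.) The provisioning that fio.sh drives from here is all rpc.py based: two malloc bdevs become plain namespaces, two more are combined into a raid0 bdev, three more into a concat bdev, and everything is exposed through nqn.2016-06.io.spdk:cnode1 with a TCP listener on 10.0.0.2:4420. Condensed into one place (rpc.py path shortened, default RPC socket assumed), the sequence looks roughly like this:

rpc=./scripts/rpc.py                                    # the job uses the full workspace path to scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192            # transport options copied from NVMF_TRANSPORT_OPTS
m0=$($rpc bdev_malloc_create 64 512)                    # 64 MiB, 512 B blocks; prints the new bdev name
m1=$($rpc bdev_malloc_create 64 512)
m2=$($rpc bdev_malloc_create 64 512)                    # raid0 members
m3=$($rpc bdev_malloc_create 64 512)
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m2 $m3"  # options as issued by fio.sh
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$m0"
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$m1"
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The concat0 bdev is built the same way further down with bdev_raid_create -n concat0 -r concat -z 64 over three more malloc bdevs and added as a fourth namespace, which is why the initiator later waits for four devices carrying the serial SPDKISFASTANDAWESOME before starting fio.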
00:17:08.243 00:43:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:08.501 00:43:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:08.501 00:43:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:08.760 00:43:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:09.018 00:43:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:09.018 00:43:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:09.277 00:43:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:09.277 00:43:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:09.277 00:43:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:09.277 00:43:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:09.535 00:43:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:09.797 00:43:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:09.797 00:43:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:10.058 00:43:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:10.058 00:43:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:10.315 00:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.573 [2024-07-16 00:43:28.263180] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.574 00:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:10.831 00:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:11.089 00:43:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:12.466 00:43:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:12.466 00:43:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:12.466 00:43:30 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:12.466 00:43:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:12.466 00:43:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:12.466 00:43:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:14.368 00:43:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:14.368 00:43:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:14.368 00:43:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:14.368 00:43:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:14.368 00:43:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:14.368 00:43:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:14.368 00:43:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:14.368 [global] 00:17:14.368 thread=1 00:17:14.368 invalidate=1 00:17:14.368 rw=write 00:17:14.368 time_based=1 00:17:14.368 runtime=1 00:17:14.368 ioengine=libaio 00:17:14.368 direct=1 00:17:14.368 bs=4096 00:17:14.368 iodepth=1 00:17:14.368 norandommap=0 00:17:14.368 numjobs=1 00:17:14.368 00:17:14.368 verify_dump=1 00:17:14.368 verify_backlog=512 00:17:14.368 verify_state_save=0 00:17:14.368 do_verify=1 00:17:14.368 verify=crc32c-intel 00:17:14.368 [job0] 00:17:14.368 filename=/dev/nvme0n1 00:17:14.368 [job1] 00:17:14.368 filename=/dev/nvme0n2 00:17:14.671 [job2] 00:17:14.671 filename=/dev/nvme0n3 00:17:14.671 [job3] 00:17:14.671 filename=/dev/nvme0n4 00:17:14.671 Could not set queue depth (nvme0n1) 00:17:14.671 Could not set queue depth (nvme0n2) 00:17:14.671 Could not set queue depth (nvme0n3) 00:17:14.671 Could not set queue depth (nvme0n4) 00:17:14.940 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.940 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.940 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.940 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:14.940 fio-3.35 00:17:14.940 Starting 4 threads 00:17:16.343 00:17:16.343 job0: (groupid=0, jobs=1): err= 0: pid=3025894: Tue Jul 16 00:43:33 2024 00:17:16.343 read: IOPS=21, BW=85.0KiB/s (87.1kB/s)(88.0KiB/1035msec) 00:17:16.343 slat (nsec): min=10761, max=23521, avg=21663.55, stdev=2586.01 00:17:16.343 clat (usec): min=40773, max=41111, avg=40971.06, stdev=90.05 00:17:16.343 lat (usec): min=40795, max=41135, avg=40992.72, stdev=89.39 00:17:16.343 clat percentiles (usec): 00:17:16.343 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:17:16.343 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:16.343 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:16.343 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:16.343 | 99.99th=[41157] 00:17:16.343 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:17:16.343 slat (nsec): min=10091, max=39356, avg=11735.37, stdev=2367.24 
00:17:16.343 clat (usec): min=189, max=422, avg=244.70, stdev=22.52 00:17:16.343 lat (usec): min=200, max=461, avg=256.43, stdev=23.14 00:17:16.343 clat percentiles (usec): 00:17:16.343 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 227], 00:17:16.343 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 249], 00:17:16.343 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 281], 00:17:16.343 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 424], 99.95th=[ 424], 00:17:16.343 | 99.99th=[ 424] 00:17:16.343 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:17:16.343 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:16.343 lat (usec) : 250=61.24%, 500=34.64% 00:17:16.343 lat (msec) : 50=4.12% 00:17:16.343 cpu : usr=0.39%, sys=0.87%, ctx=534, majf=0, minf=2 00:17:16.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.343 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.343 job1: (groupid=0, jobs=1): err= 0: pid=3025895: Tue Jul 16 00:43:33 2024 00:17:16.343 read: IOPS=1066, BW=4268KiB/s (4370kB/s)(4272KiB/1001msec) 00:17:16.343 slat (nsec): min=6451, max=31372, avg=7283.59, stdev=1333.44 00:17:16.343 clat (usec): min=387, max=940, avg=458.71, stdev=30.22 00:17:16.343 lat (usec): min=395, max=950, avg=465.99, stdev=30.26 00:17:16.343 clat percentiles (usec): 00:17:16.343 | 1.00th=[ 400], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 437], 00:17:16.343 | 30.00th=[ 445], 40.00th=[ 453], 50.00th=[ 457], 60.00th=[ 465], 00:17:16.343 | 70.00th=[ 474], 80.00th=[ 478], 90.00th=[ 486], 95.00th=[ 498], 00:17:16.343 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[ 668], 99.95th=[ 938], 00:17:16.343 | 99.99th=[ 938] 00:17:16.343 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:16.343 slat (nsec): min=9578, max=69686, avg=10659.20, stdev=2026.97 00:17:16.343 clat (usec): min=244, max=513, avg=312.88, stdev=43.74 00:17:16.343 lat (usec): min=254, max=525, avg=323.54, stdev=43.95 00:17:16.343 clat percentiles (usec): 00:17:16.343 | 1.00th=[ 251], 5.00th=[ 260], 10.00th=[ 273], 20.00th=[ 285], 00:17:16.343 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 310], 00:17:16.343 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 367], 95.00th=[ 416], 00:17:16.343 | 99.00th=[ 494], 99.50th=[ 498], 99.90th=[ 498], 99.95th=[ 515], 00:17:16.343 | 99.99th=[ 515] 00:17:16.343 bw ( KiB/s): min= 5968, max= 5968, per=50.27%, avg=5968.00, stdev= 0.00, samples=1 00:17:16.343 iops : min= 1492, max= 1492, avg=1492.00, stdev= 0.00, samples=1 00:17:16.343 lat (usec) : 250=0.46%, 500=97.85%, 750=1.65%, 1000=0.04% 00:17:16.343 cpu : usr=1.50%, sys=2.30%, ctx=2606, majf=0, minf=1 00:17:16.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.343 issued rwts: total=1068,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.343 job2: (groupid=0, jobs=1): err= 0: pid=3025896: Tue Jul 16 00:43:33 2024 00:17:16.343 read: IOPS=19, BW=77.4KiB/s (79.2kB/s)(80.0KiB/1034msec) 
00:17:16.343 slat (nsec): min=10318, max=15314, avg=12541.60, stdev=1440.42 00:17:16.343 clat (usec): min=40905, max=42051, avg=41459.29, stdev=498.89 00:17:16.343 lat (usec): min=40917, max=42062, avg=41471.83, stdev=498.53 00:17:16.343 clat percentiles (usec): 00:17:16.343 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:16.343 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:17:16.343 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:16.343 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:16.343 | 99.99th=[42206] 00:17:16.343 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:17:16.343 slat (nsec): min=9665, max=67309, avg=13259.69, stdev=6085.00 00:17:16.343 clat (usec): min=293, max=541, avg=383.21, stdev=44.70 00:17:16.343 lat (usec): min=305, max=556, avg=396.47, stdev=44.76 00:17:16.343 clat percentiles (usec): 00:17:16.343 | 1.00th=[ 310], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 347], 00:17:16.343 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 375], 60.00th=[ 392], 00:17:16.343 | 70.00th=[ 408], 80.00th=[ 424], 90.00th=[ 437], 95.00th=[ 465], 00:17:16.343 | 99.00th=[ 502], 99.50th=[ 515], 99.90th=[ 545], 99.95th=[ 545], 00:17:16.343 | 99.99th=[ 545] 00:17:16.343 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, avg=4096.00, stdev= 0.00, samples=1 00:17:16.343 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:16.343 lat (usec) : 500=95.11%, 750=1.13% 00:17:16.343 lat (msec) : 50=3.76% 00:17:16.343 cpu : usr=0.39%, sys=0.58%, ctx=533, majf=0, minf=1 00:17:16.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.343 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.343 job3: (groupid=0, jobs=1): err= 0: pid=3025900: Tue Jul 16 00:43:33 2024 00:17:16.343 read: IOPS=20, BW=82.0KiB/s (84.0kB/s)(84.0KiB/1024msec) 00:17:16.343 slat (nsec): min=10666, max=23721, avg=22580.81, stdev=2742.62 00:17:16.343 clat (usec): min=40898, max=42020, avg=41517.33, stdev=459.63 00:17:16.343 lat (usec): min=40921, max=42043, avg=41539.91, stdev=459.82 00:17:16.343 clat percentiles (usec): 00:17:16.343 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:16.343 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:17:16.343 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:16.343 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:16.343 | 99.99th=[42206] 00:17:16.343 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:17:16.343 slat (nsec): min=11219, max=38124, avg=12692.30, stdev=1957.32 00:17:16.343 clat (usec): min=226, max=509, avg=279.27, stdev=23.40 00:17:16.343 lat (usec): min=239, max=521, avg=291.96, stdev=23.66 00:17:16.343 clat percentiles (usec): 00:17:16.343 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 260], 00:17:16.343 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:17:16.343 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 318], 00:17:16.343 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 510], 99.95th=[ 510], 00:17:16.343 | 99.99th=[ 510] 00:17:16.343 bw ( KiB/s): min= 4096, max= 4096, per=34.50%, 
avg=4096.00, stdev= 0.00, samples=1 00:17:16.343 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:16.343 lat (usec) : 250=7.13%, 500=88.74%, 750=0.19% 00:17:16.343 lat (msec) : 50=3.94% 00:17:16.343 cpu : usr=0.39%, sys=0.49%, ctx=535, majf=0, minf=1 00:17:16.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:16.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.343 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:16.343 00:17:16.343 Run status group 0 (all jobs): 00:17:16.343 READ: bw=4371KiB/s (4476kB/s), 77.4KiB/s-4268KiB/s (79.2kB/s-4370kB/s), io=4524KiB (4633kB), run=1001-1035msec 00:17:16.343 WRITE: bw=11.6MiB/s (12.2MB/s), 1979KiB/s-6138KiB/s (2026kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1035msec 00:17:16.343 00:17:16.343 Disk stats (read/write): 00:17:16.343 nvme0n1: ios=66/512, merge=0/0, ticks=689/120, in_queue=809, util=85.27% 00:17:16.343 nvme0n2: ios=1049/1040, merge=0/0, ticks=1452/329, in_queue=1781, util=96.01% 00:17:16.343 nvme0n3: ios=15/512, merge=0/0, ticks=621/189, in_queue=810, util=88.40% 00:17:16.343 nvme0n4: ios=38/512, merge=0/0, ticks=1575/138, in_queue=1713, util=96.12% 00:17:16.343 00:43:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:16.343 [global] 00:17:16.343 thread=1 00:17:16.343 invalidate=1 00:17:16.343 rw=randwrite 00:17:16.343 time_based=1 00:17:16.343 runtime=1 00:17:16.343 ioengine=libaio 00:17:16.343 direct=1 00:17:16.344 bs=4096 00:17:16.344 iodepth=1 00:17:16.344 norandommap=0 00:17:16.344 numjobs=1 00:17:16.344 00:17:16.344 verify_dump=1 00:17:16.344 verify_backlog=512 00:17:16.344 verify_state_save=0 00:17:16.344 do_verify=1 00:17:16.344 verify=crc32c-intel 00:17:16.344 [job0] 00:17:16.344 filename=/dev/nvme0n1 00:17:16.344 [job1] 00:17:16.344 filename=/dev/nvme0n2 00:17:16.344 [job2] 00:17:16.344 filename=/dev/nvme0n3 00:17:16.344 [job3] 00:17:16.344 filename=/dev/nvme0n4 00:17:16.344 Could not set queue depth (nvme0n1) 00:17:16.344 Could not set queue depth (nvme0n2) 00:17:16.344 Could not set queue depth (nvme0n3) 00:17:16.344 Could not set queue depth (nvme0n4) 00:17:16.609 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.609 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.609 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.609 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:16.609 fio-3.35 00:17:16.609 Starting 4 threads 00:17:18.002 00:17:18.002 job0: (groupid=0, jobs=1): err= 0: pid=3026331: Tue Jul 16 00:43:35 2024 00:17:18.002 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:17:18.002 slat (nsec): min=6901, max=35203, avg=7875.89, stdev=1590.50 00:17:18.002 clat (usec): min=368, max=1715, avg=449.89, stdev=84.87 00:17:18.002 lat (usec): min=376, max=1722, avg=457.76, stdev=84.90 00:17:18.002 clat percentiles (usec): 00:17:18.002 | 1.00th=[ 379], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 408], 00:17:18.002 | 30.00th=[ 416], 40.00th=[ 420], 50.00th=[ 429], 60.00th=[ 437], 
00:17:18.002 | 70.00th=[ 445], 80.00th=[ 502], 90.00th=[ 537], 95.00th=[ 553], 00:17:18.003 | 99.00th=[ 603], 99.50th=[ 668], 99.90th=[ 1500], 99.95th=[ 1713], 00:17:18.003 | 99.99th=[ 1713] 00:17:18.003 write: IOPS=1472, BW=5890KiB/s (6031kB/s)(5896KiB/1001msec); 0 zone resets 00:17:18.003 slat (nsec): min=9104, max=66126, avg=11466.05, stdev=2590.07 00:17:18.003 clat (usec): min=252, max=849, avg=343.98, stdev=73.13 00:17:18.003 lat (usec): min=263, max=861, avg=355.44, stdev=73.55 00:17:18.003 clat percentiles (usec): 00:17:18.003 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 297], 00:17:18.003 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 343], 00:17:18.003 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 469], 95.00th=[ 519], 00:17:18.003 | 99.00th=[ 578], 99.50th=[ 693], 99.90th=[ 848], 99.95th=[ 848], 00:17:18.003 | 99.99th=[ 848] 00:17:18.003 bw ( KiB/s): min= 5784, max= 5784, per=48.66%, avg=5784.00, stdev= 0.00, samples=1 00:17:18.003 iops : min= 1446, max= 1446, avg=1446.00, stdev= 0.00, samples=1 00:17:18.003 lat (usec) : 500=87.35%, 750=12.41%, 1000=0.08% 00:17:18.003 lat (msec) : 2=0.16% 00:17:18.003 cpu : usr=2.50%, sys=3.60%, ctx=2499, majf=0, minf=1 00:17:18.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:18.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.003 issued rwts: total=1024,1474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:18.003 job1: (groupid=0, jobs=1): err= 0: pid=3026335: Tue Jul 16 00:43:35 2024 00:17:18.003 read: IOPS=19, BW=79.8KiB/s (81.7kB/s)(80.0KiB/1003msec) 00:17:18.003 slat (nsec): min=8380, max=22804, avg=20868.70, stdev=3935.67 00:17:18.003 clat (usec): min=443, max=41996, avg=39534.81, stdev=9211.86 00:17:18.003 lat (usec): min=464, max=42018, avg=39555.68, stdev=9211.83 00:17:18.003 clat percentiles (usec): 00:17:18.003 | 1.00th=[ 445], 5.00th=[ 445], 10.00th=[41157], 20.00th=[41157], 00:17:18.003 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:17:18.003 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:18.003 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:18.003 | 99.99th=[42206] 00:17:18.003 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:17:18.003 slat (nsec): min=8807, max=59413, avg=10705.05, stdev=2626.29 00:17:18.003 clat (usec): min=272, max=813, avg=399.19, stdev=92.04 00:17:18.003 lat (usec): min=283, max=823, avg=409.89, stdev=92.95 00:17:18.003 clat percentiles (usec): 00:17:18.003 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 318], 00:17:18.003 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 379], 00:17:18.003 | 70.00th=[ 486], 80.00th=[ 502], 90.00th=[ 523], 95.00th=[ 537], 00:17:18.003 | 99.00th=[ 627], 99.50th=[ 644], 99.90th=[ 816], 99.95th=[ 816], 00:17:18.003 | 99.99th=[ 816] 00:17:18.003 bw ( KiB/s): min= 4096, max= 4096, per=34.46%, avg=4096.00, stdev= 0.00, samples=1 00:17:18.003 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:18.003 lat (usec) : 500=76.13%, 750=20.11%, 1000=0.19% 00:17:18.003 lat (msec) : 50=3.57% 00:17:18.003 cpu : usr=0.70%, sys=0.10%, ctx=533, majf=0, minf=2 00:17:18.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:18.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:17:18.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.003 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:18.003 job2: (groupid=0, jobs=1): err= 0: pid=3026349: Tue Jul 16 00:43:35 2024 00:17:18.003 read: IOPS=18, BW=75.2KiB/s (77.1kB/s)(76.0KiB/1010msec) 00:17:18.003 slat (nsec): min=9303, max=23960, avg=18420.95, stdev=6574.86 00:17:18.003 clat (usec): min=40870, max=42094, avg=41531.93, stdev=518.91 00:17:18.003 lat (usec): min=40893, max=42103, avg=41550.35, stdev=517.64 00:17:18.003 clat percentiles (usec): 00:17:18.003 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:17:18.003 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:17:18.003 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:18.003 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:18.003 | 99.99th=[42206] 00:17:18.003 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:17:18.003 slat (nsec): min=9453, max=87379, avg=11637.49, stdev=4821.29 00:17:18.003 clat (usec): min=225, max=846, avg=416.56, stdev=116.04 00:17:18.003 lat (usec): min=236, max=856, avg=428.20, stdev=116.61 00:17:18.003 clat percentiles (usec): 00:17:18.003 | 1.00th=[ 243], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 285], 00:17:18.003 | 30.00th=[ 310], 40.00th=[ 355], 50.00th=[ 478], 60.00th=[ 494], 00:17:18.003 | 70.00th=[ 506], 80.00th=[ 523], 90.00th=[ 537], 95.00th=[ 545], 00:17:18.003 | 99.00th=[ 652], 99.50th=[ 775], 99.90th=[ 848], 99.95th=[ 848], 00:17:18.003 | 99.99th=[ 848] 00:17:18.003 bw ( KiB/s): min= 4096, max= 4096, per=34.46%, avg=4096.00, stdev= 0.00, samples=1 00:17:18.003 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:18.003 lat (usec) : 250=3.39%, 500=58.38%, 750=33.90%, 1000=0.75% 00:17:18.003 lat (msec) : 50=3.58% 00:17:18.003 cpu : usr=0.50%, sys=0.40%, ctx=535, majf=0, minf=1 00:17:18.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:18.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.003 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:18.003 job3: (groupid=0, jobs=1): err= 0: pid=3026355: Tue Jul 16 00:43:35 2024 00:17:18.003 read: IOPS=380, BW=1520KiB/s (1557kB/s)(1540KiB/1013msec) 00:17:18.003 slat (nsec): min=6490, max=21348, avg=8133.11, stdev=1651.33 00:17:18.003 clat (usec): min=277, max=42277, avg=2266.35, stdev=8702.36 00:17:18.003 lat (usec): min=284, max=42285, avg=2274.48, stdev=8703.15 00:17:18.003 clat percentiles (usec): 00:17:18.003 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 322], 00:17:18.003 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 347], 00:17:18.003 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 379], 95.00th=[ 562], 00:17:18.003 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:18.003 | 99.99th=[42206] 00:17:18.003 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:17:18.003 slat (nsec): min=9079, max=45821, avg=12411.73, stdev=4078.84 00:17:18.003 clat (usec): min=194, max=470, avg=250.96, stdev=23.08 00:17:18.003 lat (usec): min=207, max=506, avg=263.38, stdev=24.32 00:17:18.003 clat percentiles 
(usec): 00:17:18.003 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 233], 00:17:18.003 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 255], 00:17:18.003 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:17:18.003 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 469], 99.95th=[ 469], 00:17:18.003 | 99.99th=[ 469] 00:17:18.003 bw ( KiB/s): min= 4096, max= 4096, per=34.46%, avg=4096.00, stdev= 0.00, samples=1 00:17:18.003 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:18.003 lat (usec) : 250=28.65%, 500=69.12%, 750=0.11%, 1000=0.11% 00:17:18.003 lat (msec) : 50=2.01% 00:17:18.003 cpu : usr=0.69%, sys=0.79%, ctx=897, majf=0, minf=1 00:17:18.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:18.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.003 issued rwts: total=385,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:18.004 00:17:18.004 Run status group 0 (all jobs): 00:17:18.004 READ: bw=5718KiB/s (5855kB/s), 75.2KiB/s-4092KiB/s (77.1kB/s-4190kB/s), io=5792KiB (5931kB), run=1001-1013msec 00:17:18.004 WRITE: bw=11.6MiB/s (12.2MB/s), 2022KiB/s-5890KiB/s (2070kB/s-6031kB/s), io=11.8MiB (12.3MB), run=1001-1013msec 00:17:18.004 00:17:18.004 Disk stats (read/write): 00:17:18.004 nvme0n1: ios=1074/1057, merge=0/0, ticks=481/349, in_queue=830, util=87.47% 00:17:18.004 nvme0n2: ios=43/512, merge=0/0, ticks=663/203, in_queue=866, util=87.70% 00:17:18.004 nvme0n3: ios=56/512, merge=0/0, ticks=1454/212, in_queue=1666, util=96.66% 00:17:18.004 nvme0n4: ios=381/512, merge=0/0, ticks=702/123, in_queue=825, util=89.70% 00:17:18.004 00:43:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:18.004 [global] 00:17:18.004 thread=1 00:17:18.004 invalidate=1 00:17:18.004 rw=write 00:17:18.004 time_based=1 00:17:18.004 runtime=1 00:17:18.004 ioengine=libaio 00:17:18.004 direct=1 00:17:18.004 bs=4096 00:17:18.004 iodepth=128 00:17:18.004 norandommap=0 00:17:18.004 numjobs=1 00:17:18.004 00:17:18.004 verify_dump=1 00:17:18.004 verify_backlog=512 00:17:18.004 verify_state_save=0 00:17:18.004 do_verify=1 00:17:18.004 verify=crc32c-intel 00:17:18.004 [job0] 00:17:18.004 filename=/dev/nvme0n1 00:17:18.004 [job1] 00:17:18.004 filename=/dev/nvme0n2 00:17:18.004 [job2] 00:17:18.004 filename=/dev/nvme0n3 00:17:18.004 [job3] 00:17:18.004 filename=/dev/nvme0n4 00:17:18.004 Could not set queue depth (nvme0n1) 00:17:18.004 Could not set queue depth (nvme0n2) 00:17:18.004 Could not set queue depth (nvme0n3) 00:17:18.004 Could not set queue depth (nvme0n4) 00:17:18.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:18.262 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:18.262 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:18.262 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:18.262 fio-3.35 00:17:18.262 Starting 4 threads 00:17:19.644 00:17:19.644 job0: (groupid=0, jobs=1): err= 0: pid=3026770: Tue Jul 16 00:43:37 2024 00:17:19.644 read: IOPS=2136, BW=8546KiB/s 
(8751kB/s)(8700KiB/1018msec) 00:17:19.644 slat (nsec): min=1933, max=25195k, avg=214875.54, stdev=1608860.85 00:17:19.644 clat (usec): min=11411, max=51110, avg=27761.03, stdev=6956.83 00:17:19.644 lat (usec): min=11417, max=51138, avg=27975.91, stdev=7075.56 00:17:19.644 clat percentiles (usec): 00:17:19.644 | 1.00th=[12780], 5.00th=[18482], 10.00th=[21365], 20.00th=[23462], 00:17:19.644 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:17:19.644 | 70.00th=[28443], 80.00th=[33817], 90.00th=[38536], 95.00th=[43254], 00:17:19.644 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:17:19.644 | 99.99th=[51119] 00:17:19.644 write: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(10.0MiB/1018msec); 0 zone resets 00:17:19.644 slat (usec): min=3, max=48423, avg=198.44, stdev=1532.79 00:17:19.644 clat (usec): min=1533, max=50539, avg=24441.77, stdev=3947.46 00:17:19.644 lat (usec): min=1553, max=50546, avg=24640.20, stdev=4116.08 00:17:19.644 clat percentiles (usec): 00:17:19.644 | 1.00th=[ 9896], 5.00th=[16057], 10.00th=[19792], 20.00th=[23200], 00:17:19.644 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[26084], 00:17:19.644 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27395], 95.00th=[27657], 00:17:19.644 | 99.00th=[28705], 99.50th=[42206], 99.90th=[50070], 99.95th=[50594], 00:17:19.644 | 99.99th=[50594] 00:17:19.644 bw ( KiB/s): min= 9200, max=11272, per=29.09%, avg=10236.00, stdev=1465.13, samples=2 00:17:19.644 iops : min= 2300, max= 2818, avg=2559.00, stdev=366.28, samples=2 00:17:19.644 lat (msec) : 2=0.04%, 10=0.55%, 20=7.22%, 50=92.02%, 100=0.17% 00:17:19.644 cpu : usr=3.05%, sys=2.36%, ctx=271, majf=0, minf=1 00:17:19.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:19.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.644 issued rwts: total=2175,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.644 job1: (groupid=0, jobs=1): err= 0: pid=3026782: Tue Jul 16 00:43:37 2024 00:17:19.644 read: IOPS=1549, BW=6197KiB/s (6345kB/s)(6364KiB/1027msec) 00:17:19.644 slat (usec): min=2, max=35194, avg=280.00, stdev=2165.74 00:17:19.644 clat (usec): min=18327, max=76043, avg=34759.53, stdev=11738.85 00:17:19.644 lat (usec): min=18333, max=76069, avg=35039.52, stdev=11898.82 00:17:19.644 clat percentiles (usec): 00:17:19.644 | 1.00th=[18482], 5.00th=[18482], 10.00th=[19006], 20.00th=[19530], 00:17:19.644 | 30.00th=[30016], 40.00th=[32113], 50.00th=[36439], 60.00th=[36439], 00:17:19.644 | 70.00th=[38536], 80.00th=[44303], 90.00th=[51643], 95.00th=[54264], 00:17:19.644 | 99.00th=[66847], 99.50th=[67634], 99.90th=[71828], 99.95th=[76022], 00:17:19.644 | 99.99th=[76022] 00:17:19.644 write: IOPS=1994, BW=7977KiB/s (8168kB/s)(8192KiB/1027msec); 0 zone resets 00:17:19.644 slat (usec): min=5, max=24820, avg=259.07, stdev=1402.03 00:17:19.644 clat (usec): min=1482, max=84288, avg=36593.02, stdev=12465.20 00:17:19.644 lat (usec): min=1497, max=84299, avg=36852.09, stdev=12559.71 00:17:19.644 clat percentiles (usec): 00:17:19.644 | 1.00th=[10945], 5.00th=[11731], 10.00th=[23200], 20.00th=[31327], 00:17:19.644 | 30.00th=[34341], 40.00th=[36439], 50.00th=[36963], 60.00th=[37487], 00:17:19.644 | 70.00th=[37487], 80.00th=[38011], 90.00th=[51119], 95.00th=[65799], 00:17:19.644 | 99.00th=[80217], 99.50th=[82314], 99.90th=[84411], 99.95th=[84411], 
00:17:19.644 | 99.99th=[84411] 00:17:19.644 bw ( KiB/s): min= 7616, max= 8192, per=22.46%, avg=7904.00, stdev=407.29, samples=2 00:17:19.644 iops : min= 1904, max= 2048, avg=1976.00, stdev=101.82, samples=2 00:17:19.644 lat (msec) : 2=0.22%, 20=13.69%, 50=74.55%, 100=11.54% 00:17:19.644 cpu : usr=1.95%, sys=3.31%, ctx=235, majf=0, minf=1 00:17:19.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:17:19.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.644 issued rwts: total=1591,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.644 job2: (groupid=0, jobs=1): err= 0: pid=3026805: Tue Jul 16 00:43:37 2024 00:17:19.644 read: IOPS=1994, BW=7977KiB/s (8168kB/s)(8192KiB/1027msec) 00:17:19.644 slat (nsec): min=1944, max=26401k, avg=241144.21, stdev=1849743.35 00:17:19.644 clat (usec): min=10441, max=56266, avg=30527.20, stdev=5986.58 00:17:19.644 lat (usec): min=10448, max=56273, avg=30768.34, stdev=6166.37 00:17:19.644 clat percentiles (usec): 00:17:19.644 | 1.00th=[13566], 5.00th=[24249], 10.00th=[26870], 20.00th=[28181], 00:17:19.644 | 30.00th=[28705], 40.00th=[28967], 50.00th=[29230], 60.00th=[29492], 00:17:19.644 | 70.00th=[29754], 80.00th=[33424], 90.00th=[38011], 95.00th=[42730], 00:17:19.644 | 99.00th=[54264], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:17:19.644 | 99.99th=[56361] 00:17:19.644 write: IOPS=2358, BW=9433KiB/s (9660kB/s)(9688KiB/1027msec); 0 zone resets 00:17:19.644 slat (usec): min=3, max=23855, avg=201.58, stdev=1596.89 00:17:19.644 clat (usec): min=6216, max=59151, avg=27867.65, stdev=6449.37 00:17:19.644 lat (usec): min=6226, max=59156, avg=28069.23, stdev=6653.78 00:17:19.644 clat percentiles (usec): 00:17:19.645 | 1.00th=[10159], 5.00th=[18744], 10.00th=[22414], 20.00th=[23987], 00:17:19.645 | 30.00th=[27395], 40.00th=[27919], 50.00th=[28443], 60.00th=[28443], 00:17:19.645 | 70.00th=[28705], 80.00th=[30802], 90.00th=[31065], 95.00th=[35390], 00:17:19.645 | 99.00th=[53740], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:17:19.645 | 99.99th=[58983] 00:17:19.645 bw ( KiB/s): min= 9048, max= 9312, per=26.09%, avg=9180.00, stdev=186.68, samples=2 00:17:19.645 iops : min= 2262, max= 2328, avg=2295.00, stdev=46.67, samples=2 00:17:19.645 lat (msec) : 10=0.45%, 20=4.59%, 50=93.11%, 100=1.86% 00:17:19.645 cpu : usr=2.83%, sys=2.73%, ctx=180, majf=0, minf=1 00:17:19.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:17:19.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.645 issued rwts: total=2048,2422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.645 job3: (groupid=0, jobs=1): err= 0: pid=3026813: Tue Jul 16 00:43:37 2024 00:17:19.645 read: IOPS=1491, BW=5965KiB/s (6108kB/s)(6144KiB/1030msec) 00:17:19.645 slat (usec): min=2, max=36268, avg=327.05, stdev=2514.16 00:17:19.645 clat (msec): min=10, max=112, avg=38.20, stdev=20.96 00:17:19.645 lat (msec): min=10, max=112, avg=38.53, stdev=21.10 00:17:19.645 clat percentiles (msec): 00:17:19.645 | 1.00th=[ 12], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 23], 00:17:19.645 | 30.00th=[ 23], 40.00th=[ 28], 50.00th=[ 32], 60.00th=[ 37], 00:17:19.645 | 70.00th=[ 40], 80.00th=[ 47], 90.00th=[ 
71], 95.00th=[ 94], 00:17:19.645 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 112], 99.95th=[ 112], 00:17:19.645 | 99.99th=[ 112] 00:17:19.645 write: IOPS=1970, BW=7883KiB/s (8073kB/s)(8120KiB/1030msec); 0 zone resets 00:17:19.645 slat (usec): min=5, max=34288, avg=237.16, stdev=1443.33 00:17:19.645 clat (msec): min=6, max=112, avg=35.35, stdev=10.45 00:17:19.645 lat (msec): min=6, max=112, avg=35.59, stdev=10.53 00:17:19.645 clat percentiles (msec): 00:17:19.645 | 1.00th=[ 9], 5.00th=[ 17], 10.00th=[ 24], 20.00th=[ 33], 00:17:19.645 | 30.00th=[ 35], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 38], 00:17:19.645 | 70.00th=[ 38], 80.00th=[ 38], 90.00th=[ 42], 95.00th=[ 56], 00:17:19.645 | 99.00th=[ 66], 99.50th=[ 80], 99.90th=[ 85], 99.95th=[ 112], 00:17:19.645 | 99.99th=[ 112] 00:17:19.645 bw ( KiB/s): min= 7032, max= 8192, per=21.63%, avg=7612.00, stdev=820.24, samples=2 00:17:19.645 iops : min= 1758, max= 2048, avg=1903.00, stdev=205.06, samples=2 00:17:19.645 lat (msec) : 10=0.95%, 20=4.40%, 50=81.35%, 100=11.97%, 250=1.32% 00:17:19.645 cpu : usr=1.65%, sys=3.59%, ctx=226, majf=0, minf=1 00:17:19.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:17:19.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:19.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:19.645 issued rwts: total=1536,2030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:19.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:19.645 00:17:19.645 Run status group 0 (all jobs): 00:17:19.645 READ: bw=27.9MiB/s (29.2MB/s), 5965KiB/s-8546KiB/s (6108kB/s-8751kB/s), io=28.7MiB (30.1MB), run=1018-1030msec 00:17:19.645 WRITE: bw=34.4MiB/s (36.0MB/s), 7883KiB/s-9.82MiB/s (8073kB/s-10.3MB/s), io=35.4MiB (37.1MB), run=1018-1030msec 00:17:19.645 00:17:19.645 Disk stats (read/write): 00:17:19.645 nvme0n1: ios=1685/2048, merge=0/0, ticks=46553/49336, in_queue=95889, util=91.88% 00:17:19.645 nvme0n2: ios=1569/1536, merge=0/0, ticks=52074/49903, in_queue=101977, util=93.36% 00:17:19.645 nvme0n3: ios=1578/2047, merge=0/0, ticks=46355/54553, in_queue=100908, util=96.81% 00:17:19.645 nvme0n4: ios=1192/1536, merge=0/0, ticks=49621/52842, in_queue=102463, util=96.77% 00:17:19.645 00:43:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:19.645 [global] 00:17:19.645 thread=1 00:17:19.645 invalidate=1 00:17:19.645 rw=randwrite 00:17:19.645 time_based=1 00:17:19.645 runtime=1 00:17:19.645 ioengine=libaio 00:17:19.645 direct=1 00:17:19.645 bs=4096 00:17:19.645 iodepth=128 00:17:19.645 norandommap=0 00:17:19.645 numjobs=1 00:17:19.645 00:17:19.645 verify_dump=1 00:17:19.645 verify_backlog=512 00:17:19.645 verify_state_save=0 00:17:19.645 do_verify=1 00:17:19.645 verify=crc32c-intel 00:17:19.645 [job0] 00:17:19.645 filename=/dev/nvme0n1 00:17:19.645 [job1] 00:17:19.645 filename=/dev/nvme0n2 00:17:19.645 [job2] 00:17:19.645 filename=/dev/nvme0n3 00:17:19.645 [job3] 00:17:19.645 filename=/dev/nvme0n4 00:17:19.645 Could not set queue depth (nvme0n1) 00:17:19.645 Could not set queue depth (nvme0n2) 00:17:19.645 Could not set queue depth (nvme0n3) 00:17:19.645 Could not set queue depth (nvme0n4) 00:17:19.903 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:19.903 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:17:19.903 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:19.903 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:19.903 fio-3.35 00:17:19.903 Starting 4 threads 00:17:21.285 00:17:21.285 job0: (groupid=0, jobs=1): err= 0: pid=3027244: Tue Jul 16 00:43:38 2024 00:17:21.285 read: IOPS=1130, BW=4524KiB/s (4632kB/s)(4596KiB/1016msec) 00:17:21.285 slat (usec): min=3, max=54388, avg=404.80, stdev=3426.18 00:17:21.285 clat (msec): min=2, max=126, avg=49.42, stdev=15.75 00:17:21.285 lat (msec): min=25, max=126, avg=49.82, stdev=16.07 00:17:21.285 clat percentiles (msec): 00:17:21.285 | 1.00th=[ 27], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 41], 00:17:21.285 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 48], 00:17:21.285 | 70.00th=[ 64], 80.00th=[ 67], 90.00th=[ 72], 95.00th=[ 73], 00:17:21.285 | 99.00th=[ 81], 99.50th=[ 89], 99.90th=[ 122], 99.95th=[ 127], 00:17:21.285 | 99.99th=[ 127] 00:17:21.285 write: IOPS=1511, BW=6047KiB/s (6192kB/s)(6144KiB/1016msec); 0 zone resets 00:17:21.285 slat (usec): min=5, max=41205, avg=332.94, stdev=2236.99 00:17:21.285 clat (usec): min=2796, max=94590, avg=46854.16, stdev=19575.19 00:17:21.285 lat (usec): min=2810, max=94611, avg=47187.10, stdev=19808.55 00:17:21.285 clat percentiles (usec): 00:17:21.285 | 1.00th=[14746], 5.00th=[15008], 10.00th=[18482], 20.00th=[22152], 00:17:21.285 | 30.00th=[36963], 40.00th=[43254], 50.00th=[51119], 60.00th=[53740], 00:17:21.285 | 70.00th=[58459], 80.00th=[67634], 90.00th=[70779], 95.00th=[76022], 00:17:21.285 | 99.00th=[79168], 99.50th=[81265], 99.90th=[92799], 99.95th=[94897], 00:17:21.285 | 99.99th=[94897] 00:17:21.285 bw ( KiB/s): min= 4528, max= 7736, per=20.32%, avg=6132.00, stdev=2268.40, samples=2 00:17:21.285 iops : min= 1132, max= 1934, avg=1533.00, stdev=567.10, samples=2 00:17:21.285 lat (msec) : 4=0.11%, 10=0.22%, 20=5.85%, 50=47.19%, 100=46.55% 00:17:21.285 lat (msec) : 250=0.07% 00:17:21.285 cpu : usr=0.99%, sys=2.76%, ctx=138, majf=0, minf=1 00:17:21.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:17:21.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:21.285 issued rwts: total=1149,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:21.285 job1: (groupid=0, jobs=1): err= 0: pid=3027253: Tue Jul 16 00:43:38 2024 00:17:21.285 read: IOPS=2104, BW=8416KiB/s (8618kB/s)(8576KiB/1019msec) 00:17:21.285 slat (nsec): min=1939, max=37735k, avg=162634.94, stdev=1727770.76 00:17:21.285 clat (usec): min=1605, max=90313, avg=22679.18, stdev=17139.84 00:17:21.285 lat (usec): min=1615, max=90337, avg=22841.82, stdev=17317.63 00:17:21.285 clat percentiles (usec): 00:17:21.285 | 1.00th=[ 3130], 5.00th=[ 5014], 10.00th=[ 6456], 20.00th=[11600], 00:17:21.285 | 30.00th=[12518], 40.00th=[13960], 50.00th=[14353], 60.00th=[18744], 00:17:21.285 | 70.00th=[24511], 80.00th=[42730], 90.00th=[51119], 95.00th=[54264], 00:17:21.285 | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:17:21.285 | 99.99th=[90702] 00:17:21.285 write: IOPS=2512, BW=9.81MiB/s (10.3MB/s)(10.0MiB/1019msec); 0 zone resets 00:17:21.285 slat (usec): min=3, max=47355, avg=214.66, stdev=1880.14 00:17:21.285 clat (usec): min=613, max=116043, avg=31440.81, stdev=26617.26 00:17:21.285 lat 
(usec): min=624, max=116052, avg=31655.48, stdev=26822.30 00:17:21.285 clat percentiles (usec): 00:17:21.285 | 1.00th=[ 1172], 5.00th=[ 5407], 10.00th=[ 8717], 20.00th=[ 12780], 00:17:21.285 | 30.00th=[ 13829], 40.00th=[ 14877], 50.00th=[ 18220], 60.00th=[ 24511], 00:17:21.285 | 70.00th=[ 38011], 80.00th=[ 54264], 90.00th=[ 73925], 95.00th=[ 83362], 00:17:21.285 | 99.00th=[109577], 99.50th=[112722], 99.90th=[115868], 99.95th=[115868], 00:17:21.285 | 99.99th=[115868] 00:17:21.285 bw ( KiB/s): min= 4848, max=15384, per=33.53%, avg=10116.00, stdev=7450.08, samples=2 00:17:21.285 iops : min= 1212, max= 3846, avg=2529.00, stdev=1862.52, samples=2 00:17:21.285 lat (usec) : 750=0.06%, 1000=0.11% 00:17:21.285 lat (msec) : 2=1.34%, 4=2.19%, 10=12.31%, 20=40.86%, 50=23.51% 00:17:21.285 lat (msec) : 100=18.45%, 250=1.17% 00:17:21.285 cpu : usr=1.87%, sys=3.34%, ctx=222, majf=0, minf=1 00:17:21.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:21.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:21.285 issued rwts: total=2144,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:21.285 job2: (groupid=0, jobs=1): err= 0: pid=3027268: Tue Jul 16 00:43:38 2024 00:17:21.285 read: IOPS=2220, BW=8883KiB/s (9096kB/s)(9096KiB/1024msec) 00:17:21.285 slat (nsec): min=1854, max=26836k, avg=233050.31, stdev=1722111.78 00:17:21.285 clat (usec): min=7764, max=55148, avg=28287.57, stdev=6715.30 00:17:21.285 lat (usec): min=8787, max=57180, avg=28520.62, stdev=6838.07 00:17:21.285 clat percentiles (usec): 00:17:21.285 | 1.00th=[10028], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:17:21.285 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:17:21.285 | 70.00th=[28181], 80.00th=[31851], 90.00th=[38011], 95.00th=[43779], 00:17:21.286 | 99.00th=[49546], 99.50th=[53216], 99.90th=[55313], 99.95th=[55313], 00:17:21.286 | 99.99th=[55313] 00:17:21.286 write: IOPS=2500, BW=9.77MiB/s (10.2MB/s)(10.0MiB/1024msec); 0 zone resets 00:17:21.286 slat (usec): min=3, max=25637, avg=182.87, stdev=1297.02 00:17:21.286 clat (usec): min=5214, max=55120, avg=25585.35, stdev=5108.26 00:17:21.286 lat (usec): min=5225, max=55125, avg=25768.22, stdev=5278.31 00:17:21.286 clat percentiles (usec): 00:17:21.286 | 1.00th=[ 6718], 5.00th=[13304], 10.00th=[21627], 20.00th=[23987], 00:17:21.286 | 30.00th=[25035], 40.00th=[25560], 50.00th=[25822], 60.00th=[26608], 00:17:21.286 | 70.00th=[27919], 80.00th=[29230], 90.00th=[29754], 95.00th=[30540], 00:17:21.286 | 99.00th=[30802], 99.50th=[44827], 99.90th=[53740], 99.95th=[55313], 00:17:21.286 | 99.99th=[55313] 00:17:21.286 bw ( KiB/s): min= 8976, max=11504, per=33.94%, avg=10240.00, stdev=1787.57, samples=2 00:17:21.286 iops : min= 2244, max= 2876, avg=2560.00, stdev=446.89, samples=2 00:17:21.286 lat (msec) : 10=2.03%, 20=3.97%, 50=93.44%, 100=0.56% 00:17:21.286 cpu : usr=2.83%, sys=3.03%, ctx=273, majf=0, minf=1 00:17:21.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:21.286 issued rwts: total=2274,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:21.286 job3: (groupid=0, jobs=1): err= 0: 
pid=3027273: Tue Jul 16 00:43:38 2024 00:17:21.286 read: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec) 00:17:21.286 slat (usec): min=3, max=92297, avg=589.87, stdev=4281.78 00:17:21.286 clat (msec): min=8, max=192, avg=65.97, stdev=36.22 00:17:21.286 lat (msec): min=19, max=192, avg=66.56, stdev=36.53 00:17:21.286 clat percentiles (msec): 00:17:21.286 | 1.00th=[ 20], 5.00th=[ 21], 10.00th=[ 21], 20.00th=[ 35], 00:17:21.286 | 30.00th=[ 39], 40.00th=[ 54], 50.00th=[ 63], 60.00th=[ 69], 00:17:21.286 | 70.00th=[ 74], 80.00th=[ 96], 90.00th=[ 117], 95.00th=[ 146], 00:17:21.286 | 99.00th=[ 163], 99.50th=[ 163], 99.90th=[ 163], 99.95th=[ 192], 00:17:21.286 | 99.99th=[ 192] 00:17:21.286 write: IOPS=1053, BW=4213KiB/s (4314kB/s)(4272KiB/1014msec); 0 zone resets 00:17:21.286 slat (usec): min=4, max=66642, avg=351.95, stdev=2397.33 00:17:21.286 clat (usec): min=1631, max=212700, avg=57044.20, stdev=43646.81 00:17:21.286 lat (usec): min=1646, max=212722, avg=57396.16, stdev=43873.60 00:17:21.286 clat percentiles (msec): 00:17:21.286 | 1.00th=[ 14], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:17:21.286 | 30.00th=[ 17], 40.00th=[ 42], 50.00th=[ 64], 60.00th=[ 68], 00:17:21.286 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 115], 95.00th=[ 161], 00:17:21.286 | 99.00th=[ 197], 99.50th=[ 197], 99.90th=[ 197], 99.95th=[ 213], 00:17:21.286 | 99.99th=[ 213] 00:17:21.286 bw ( KiB/s): min= 4096, max= 4096, per=13.58%, avg=4096.00, stdev= 0.00, samples=2 00:17:21.286 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:17:21.286 lat (msec) : 2=0.14%, 10=0.10%, 20=19.65%, 50=20.89%, 100=43.93% 00:17:21.286 lat (msec) : 250=15.30% 00:17:21.286 cpu : usr=0.79%, sys=1.58%, ctx=129, majf=0, minf=1 00:17:21.286 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:17:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.286 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:21.286 issued rwts: total=1024,1068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:21.286 00:17:21.286 Run status group 0 (all jobs): 00:17:21.286 READ: bw=25.1MiB/s (26.4MB/s), 4039KiB/s-8883KiB/s (4136kB/s-9096kB/s), io=25.7MiB (27.0MB), run=1014-1024msec 00:17:21.286 WRITE: bw=29.5MiB/s (30.9MB/s), 4213KiB/s-9.81MiB/s (4314kB/s-10.3MB/s), io=30.2MiB (31.6MB), run=1014-1024msec 00:17:21.286 00:17:21.286 Disk stats (read/write): 00:17:21.286 nvme0n1: ios=931/1024, merge=0/0, ticks=47198/58593, in_queue=105791, util=87.88% 00:17:21.286 nvme0n2: ios=1585/1956, merge=0/0, ticks=38576/67248, in_queue=105824, util=94.21% 00:17:21.286 nvme0n3: ios=2066/2048, merge=0/0, ticks=55574/50290, in_queue=105864, util=95.63% 00:17:21.286 nvme0n4: ios=621/1024, merge=0/0, ticks=28144/27431, in_queue=55575, util=99.69% 00:17:21.286 00:43:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:21.286 00:43:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3027429 00:17:21.286 00:43:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:21.286 00:43:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:21.286 [global] 00:17:21.286 thread=1 00:17:21.286 invalidate=1 00:17:21.286 rw=read 00:17:21.286 time_based=1 00:17:21.286 runtime=10 00:17:21.286 ioengine=libaio 00:17:21.286 direct=1 00:17:21.286 bs=4096 00:17:21.286 iodepth=1 00:17:21.286 norandommap=1 
00:17:21.286 numjobs=1 00:17:21.286 00:17:21.286 [job0] 00:17:21.286 filename=/dev/nvme0n1 00:17:21.286 [job1] 00:17:21.286 filename=/dev/nvme0n2 00:17:21.286 [job2] 00:17:21.286 filename=/dev/nvme0n3 00:17:21.286 [job3] 00:17:21.286 filename=/dev/nvme0n4 00:17:21.286 Could not set queue depth (nvme0n1) 00:17:21.286 Could not set queue depth (nvme0n2) 00:17:21.286 Could not set queue depth (nvme0n3) 00:17:21.286 Could not set queue depth (nvme0n4) 00:17:21.544 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:21.544 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:21.544 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:21.544 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:21.544 fio-3.35 00:17:21.544 Starting 4 threads 00:17:24.082 00:43:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:24.339 00:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:24.339 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=13950976, buflen=4096 00:17:24.339 fio: pid=3027744, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:24.596 00:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:24.596 00:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:24.596 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=495616, buflen=4096 00:17:24.596 fio: pid=3027735, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:24.854 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=319488, buflen=4096 00:17:24.854 fio: pid=3027700, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:24.854 00:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:24.854 00:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:25.112 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=29405184, buflen=4096 00:17:25.112 fio: pid=3027714, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:25.112 00:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:25.112 00:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:25.112 00:17:25.112 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3027700: Tue Jul 16 00:43:42 2024 00:17:25.112 read: IOPS=24, BW=97.2KiB/s (99.5kB/s)(312KiB/3210msec) 00:17:25.112 slat (usec): min=9, max=8694, avg=204.74, stdev=1162.76 00:17:25.112 clat (usec): min=392, max=42548, avg=40492.12, stdev=6550.22 00:17:25.112 lat (usec): min=414, max=50059, avg=40625.68, stdev=6637.24 00:17:25.112 clat percentiles (usec): 00:17:25.112 | 1.00th=[ 392], 5.00th=[40633], 
10.00th=[41157], 20.00th=[41157], 00:17:25.112 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:17:25.112 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:25.112 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:25.112 | 99.99th=[42730] 00:17:25.112 bw ( KiB/s): min= 96, max= 104, per=0.78%, avg=97.33, stdev= 3.27, samples=6 00:17:25.112 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:17:25.112 lat (usec) : 500=1.27%, 750=1.27% 00:17:25.112 lat (msec) : 50=96.20% 00:17:25.112 cpu : usr=0.09%, sys=0.00%, ctx=83, majf=0, minf=1 00:17:25.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:25.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.112 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.112 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:25.112 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3027714: Tue Jul 16 00:43:42 2024 00:17:25.112 read: IOPS=2064, BW=8256KiB/s (8455kB/s)(28.0MiB/3478msec) 00:17:25.112 slat (usec): min=6, max=27596, avg=17.76, stdev=417.06 00:17:25.112 clat (usec): min=340, max=42281, avg=462.23, stdev=1090.73 00:17:25.112 lat (usec): min=347, max=50633, avg=480.00, stdev=1210.83 00:17:25.112 clat percentiles (usec): 00:17:25.112 | 1.00th=[ 363], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 392], 00:17:25.112 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 429], 00:17:25.112 | 70.00th=[ 441], 80.00th=[ 453], 90.00th=[ 502], 95.00th=[ 578], 00:17:25.112 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 1205], 99.95th=[41157], 00:17:25.112 | 99.99th=[42206] 00:17:25.112 bw ( KiB/s): min= 6992, max= 9528, per=71.25%, avg=8837.83, stdev=948.45, samples=6 00:17:25.112 iops : min= 1748, max= 2382, avg=2209.33, stdev=237.01, samples=6 00:17:25.112 lat (usec) : 500=89.93%, 750=9.86%, 1000=0.06% 00:17:25.112 lat (msec) : 2=0.06%, 10=0.01%, 50=0.07% 00:17:25.112 cpu : usr=0.83%, sys=1.81%, ctx=7188, majf=0, minf=1 00:17:25.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:25.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.112 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.112 issued rwts: total=7180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:25.112 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3027735: Tue Jul 16 00:43:42 2024 00:17:25.112 read: IOPS=40, BW=163KiB/s (166kB/s)(484KiB/2977msec) 00:17:25.112 slat (usec): min=4, max=14687, avg=133.47, stdev=1328.58 00:17:25.112 clat (usec): min=327, max=42025, avg=24220.41, stdev=20219.77 00:17:25.112 lat (usec): min=335, max=55921, avg=24354.80, stdev=20369.87 00:17:25.112 clat percentiles (usec): 00:17:25.112 | 1.00th=[ 338], 5.00th=[ 351], 10.00th=[ 383], 20.00th=[ 408], 00:17:25.112 | 30.00th=[ 474], 40.00th=[ 1369], 50.00th=[41157], 60.00th=[41157], 00:17:25.112 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:25.112 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:25.112 | 99.99th=[42206] 00:17:25.112 bw ( KiB/s): min= 96, max= 480, per=1.39%, avg=172.80, stdev=171.73, samples=5 00:17:25.112 iops : min= 24, max= 120, 
avg=43.20, stdev=42.93, samples=5 00:17:25.112 lat (usec) : 500=33.61%, 750=5.74% 00:17:25.112 lat (msec) : 2=0.82%, 4=0.82%, 10=0.82%, 50=57.38% 00:17:25.112 cpu : usr=0.00%, sys=0.10%, ctx=125, majf=0, minf=1 00:17:25.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:25.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.112 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.112 issued rwts: total=122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:25.112 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3027744: Tue Jul 16 00:43:42 2024 00:17:25.112 read: IOPS=1262, BW=5048KiB/s (5169kB/s)(13.3MiB/2699msec) 00:17:25.112 slat (nsec): min=6309, max=44552, avg=8691.46, stdev=3975.27 00:17:25.112 clat (usec): min=408, max=42356, avg=775.75, stdev=3357.56 00:17:25.112 lat (usec): min=416, max=42378, avg=784.44, stdev=3358.69 00:17:25.112 clat percentiles (usec): 00:17:25.112 | 1.00th=[ 429], 5.00th=[ 441], 10.00th=[ 449], 20.00th=[ 457], 00:17:25.112 | 30.00th=[ 465], 40.00th=[ 474], 50.00th=[ 482], 60.00th=[ 490], 00:17:25.112 | 70.00th=[ 502], 80.00th=[ 523], 90.00th=[ 578], 95.00th=[ 660], 00:17:25.112 | 99.00th=[ 840], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:17:25.112 | 99.99th=[42206] 00:17:25.112 bw ( KiB/s): min= 2520, max= 8128, per=43.87%, avg=5441.60, stdev=2508.10, samples=5 00:17:25.112 iops : min= 630, max= 2032, avg=1360.40, stdev=627.02, samples=5 00:17:25.112 lat (usec) : 500=69.00%, 750=29.88%, 1000=0.35% 00:17:25.112 lat (msec) : 2=0.03%, 4=0.03%, 50=0.68% 00:17:25.112 cpu : usr=0.48%, sys=1.74%, ctx=3407, majf=0, minf=2 00:17:25.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:25.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.112 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.112 issued rwts: total=3407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:25.112 00:17:25.112 Run status group 0 (all jobs): 00:17:25.112 READ: bw=12.1MiB/s (12.7MB/s), 97.2KiB/s-8256KiB/s (99.5kB/s-8455kB/s), io=42.1MiB (44.2MB), run=2699-3478msec 00:17:25.112 00:17:25.112 Disk stats (read/write): 00:17:25.112 nvme0n1: ios=107/0, merge=0/0, ticks=3625/0, in_queue=3625, util=98.77% 00:17:25.112 nvme0n2: ios=7176/0, merge=0/0, ticks=3142/0, in_queue=3142, util=93.79% 00:17:25.112 nvme0n3: ios=160/0, merge=0/0, ticks=3014/0, in_queue=3014, util=98.71% 00:17:25.112 nvme0n4: ios=3403/0, merge=0/0, ticks=2489/0, in_queue=2489, util=96.45% 00:17:25.370 00:43:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:25.370 00:43:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:25.628 00:43:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:25.628 00:43:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:25.886 00:43:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:25.886 00:43:43 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:26.143 00:43:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:26.143 00:43:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:26.400 00:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:26.400 00:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3027429 00:17:26.400 00:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:26.400 00:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:26.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.657 00:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:26.657 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:26.657 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:26.657 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.657 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:26.657 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.657 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:26.657 00:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:26.657 00:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:26.657 nvmf hotplug test: fio failed as expected 00:17:26.657 00:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.915 rmmod nvme_tcp 00:17:26.915 rmmod nvme_fabrics 00:17:26.915 rmmod nvme_keyring 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@489 -- # '[' -n 3024194 ']' 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3024194 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3024194 ']' 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3024194 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:26.915 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:27.173 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3024194 00:17:27.173 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:27.173 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:27.173 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3024194' 00:17:27.173 killing process with pid 3024194 00:17:27.173 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3024194 00:17:27.173 00:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3024194 00:17:27.173 00:43:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:27.173 00:43:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:27.173 00:43:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:27.173 00:43:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:27.173 00:43:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:27.173 00:43:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.173 00:43:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.173 00:43:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.709 00:43:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:29.709 00:17:29.709 real 0m29.033s 00:17:29.709 user 2m25.942s 00:17:29.709 sys 0m8.309s 00:17:29.709 00:43:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:29.709 00:43:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.709 ************************************ 00:17:29.709 END TEST nvmf_fio_target 00:17:29.709 ************************************ 00:17:29.709 00:43:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:29.709 00:43:47 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:29.709 00:43:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:29.709 00:43:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:29.709 00:43:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:29.709 ************************************ 00:17:29.709 START TEST nvmf_bdevio 00:17:29.709 ************************************ 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:29.709 * Looking for test storage... 
00:17:29.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:29.709 00:43:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:36.276 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:36.276 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:36.276 Found net devices under 0000:af:00.0: cvl_0_0 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:36.276 
Found net devices under 0000:af:00.1: cvl_0_1 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:36.276 00:43:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:36.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:17:36.276 00:17:36.276 --- 10.0.0.2 ping statistics --- 00:17:36.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.276 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:17:36.276 00:17:36.276 --- 10.0.0.1 ping statistics --- 00:17:36.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.276 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3032373 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3032373 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3032373 ']' 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.276 00:43:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.276 [2024-07-16 00:43:53.209134] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:17:36.276 [2024-07-16 00:43:53.209189] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.276 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.276 [2024-07-16 00:43:53.330523] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.277 [2024-07-16 00:43:53.476629] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.277 [2024-07-16 00:43:53.476700] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:36.277 [2024-07-16 00:43:53.476722] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.277 [2024-07-16 00:43:53.476739] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.277 [2024-07-16 00:43:53.476755] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.277 [2024-07-16 00:43:53.476899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:36.277 [2024-07-16 00:43:53.476998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:36.277 [2024-07-16 00:43:53.477094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:36.277 [2024-07-16 00:43:53.477099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.553 [2024-07-16 00:43:54.198381] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.553 Malloc0 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:17:36.553 [2024-07-16 00:43:54.262744] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:36.553 { 00:17:36.553 "params": { 00:17:36.553 "name": "Nvme$subsystem", 00:17:36.553 "trtype": "$TEST_TRANSPORT", 00:17:36.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.553 "adrfam": "ipv4", 00:17:36.553 "trsvcid": "$NVMF_PORT", 00:17:36.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.553 "hdgst": ${hdgst:-false}, 00:17:36.553 "ddgst": ${ddgst:-false} 00:17:36.553 }, 00:17:36.553 "method": "bdev_nvme_attach_controller" 00:17:36.553 } 00:17:36.553 EOF 00:17:36.553 )") 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:36.553 00:43:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:36.553 "params": { 00:17:36.553 "name": "Nvme1", 00:17:36.553 "trtype": "tcp", 00:17:36.553 "traddr": "10.0.0.2", 00:17:36.553 "adrfam": "ipv4", 00:17:36.553 "trsvcid": "4420", 00:17:36.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.553 "hdgst": false, 00:17:36.553 "ddgst": false 00:17:36.553 }, 00:17:36.553 "method": "bdev_nvme_attach_controller" 00:17:36.553 }' 00:17:36.553 [2024-07-16 00:43:54.317234] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
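The bdev_nvme_attach_controller fragment printed above reaches bdevio through --json /dev/fd/62, i.e. a process substitution of gen_nvmf_target_json. Written out to a file it would look roughly like the sketch below; the outer subsystems/config wrapper is an assumption based on SPDK's usual JSON config layout, while the params block is copied from the log.

cat > /tmp/bdevio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio.json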
00:17:36.553 [2024-07-16 00:43:54.317298] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032615 ] 00:17:36.553 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.875 [2024-07-16 00:43:54.400131] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:36.875 [2024-07-16 00:43:54.489276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.875 [2024-07-16 00:43:54.489345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.875 [2024-07-16 00:43:54.489346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.134 I/O targets: 00:17:37.134 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:37.134 00:17:37.134 00:17:37.134 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.134 http://cunit.sourceforge.net/ 00:17:37.134 00:17:37.134 00:17:37.134 Suite: bdevio tests on: Nvme1n1 00:17:37.134 Test: blockdev write read block ...passed 00:17:37.134 Test: blockdev write zeroes read block ...passed 00:17:37.134 Test: blockdev write zeroes read no split ...passed 00:17:37.391 Test: blockdev write zeroes read split ...passed 00:17:37.391 Test: blockdev write zeroes read split partial ...passed 00:17:37.391 Test: blockdev reset ...[2024-07-16 00:43:55.017921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:37.391 [2024-07-16 00:43:55.018000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11bfdd0 (9): Bad file descriptor 00:17:37.391 [2024-07-16 00:43:55.081786] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:37.391 passed 00:17:37.391 Test: blockdev write read 8 blocks ...passed 00:17:37.391 Test: blockdev write read size > 128k ...passed 00:17:37.391 Test: blockdev write read invalid size ...passed 00:17:37.391 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:37.391 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:37.391 Test: blockdev write read max offset ...passed 00:17:37.649 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:37.649 Test: blockdev writev readv 8 blocks ...passed 00:17:37.649 Test: blockdev writev readv 30 x 1block ...passed 00:17:37.649 Test: blockdev writev readv block ...passed 00:17:37.649 Test: blockdev writev readv size > 128k ...passed 00:17:37.649 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:37.649 Test: blockdev comparev and writev ...[2024-07-16 00:43:55.345384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.649 [2024-07-16 00:43:55.345447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.649 [2024-07-16 00:43:55.345490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.649 [2024-07-16 00:43:55.345514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:37.649 [2024-07-16 00:43:55.346206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.649 [2024-07-16 00:43:55.346239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:37.649 [2024-07-16 00:43:55.346288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.649 [2024-07-16 00:43:55.346311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:37.649 [2024-07-16 00:43:55.346983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.649 [2024-07-16 00:43:55.347014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:37.649 [2024-07-16 00:43:55.347050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.649 [2024-07-16 00:43:55.347072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:37.649 [2024-07-16 00:43:55.347717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.649 [2024-07-16 00:43:55.347749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:37.649 [2024-07-16 00:43:55.347787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.649 [2024-07-16 00:43:55.347808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:37.649 passed 00:17:37.649 Test: blockdev nvme passthru rw ...passed 00:17:37.649 Test: blockdev nvme passthru vendor specific ...[2024-07-16 00:43:55.430782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.649 [2024-07-16 00:43:55.430822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:37.649 [2024-07-16 00:43:55.431086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.649 [2024-07-16 00:43:55.431117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:37.649 [2024-07-16 00:43:55.431399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.649 [2024-07-16 00:43:55.431430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:37.649 [2024-07-16 00:43:55.431695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.649 [2024-07-16 00:43:55.431726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:37.649 passed 00:17:37.649 Test: blockdev nvme admin passthru ...passed 00:17:37.907 Test: blockdev copy ...passed 00:17:37.907 00:17:37.907 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.907 suites 1 1 n/a 0 0 00:17:37.907 tests 23 23 23 0 0 00:17:37.907 asserts 152 152 152 0 n/a 00:17:37.907 00:17:37.907 Elapsed time = 1.270 seconds 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:37.907 rmmod nvme_tcp 00:17:37.907 rmmod nvme_fabrics 00:17:37.907 rmmod nvme_keyring 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3032373 ']' 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3032373 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3032373 ']' 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3032373 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:37.907 00:43:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3032373 00:17:38.166 00:43:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:38.166 00:43:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:38.166 00:43:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3032373' 00:17:38.166 killing process with pid 3032373 00:17:38.166 00:43:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3032373 00:17:38.166 00:43:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3032373 00:17:38.425 00:43:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:38.425 00:43:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:38.425 00:43:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:38.425 00:43:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.425 00:43:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:38.425 00:43:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.425 00:43:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.425 00:43:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.956 00:43:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:40.956 00:17:40.956 real 0m11.030s 00:17:40.956 user 0m14.771s 00:17:40.956 sys 0m5.117s 00:17:40.956 00:43:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:40.956 00:43:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:40.956 ************************************ 00:17:40.956 END TEST nvmf_bdevio 00:17:40.956 ************************************ 00:17:40.956 00:43:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:40.956 00:43:58 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:40.956 00:43:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:40.956 00:43:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.956 00:43:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:40.956 ************************************ 00:17:40.956 START TEST nvmf_auth_target 00:17:40.956 ************************************ 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:40.956 * Looking for test storage... 
00:17:40.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.956 00:43:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:40.957 00:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.224 00:44:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.224 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.224 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.224 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.224 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.224 00:44:04 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.224 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.224 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:46.225 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:46.225 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:17:46.225 Found net devices under 0000:af:00.0: cvl_0_0 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:46.225 Found net devices under 0000:af:00.1: cvl_0_1 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.225 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:17:46.488 00:17:46.488 --- 10.0.0.2 ping statistics --- 00:17:46.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.488 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:17:46.488 00:17:46.488 --- 10.0.0.1 ping statistics --- 00:17:46.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.488 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3036401 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3036401 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3036401 ']' 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.488 00:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
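The nvmf_tcp_init sequence traced above builds a two-interface topology out of the e810 ports: cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), and a first-position iptables rule admits NVMe/TCP traffic on port 4420. Condensed, with the same names and addresses as this run:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target NIC lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP into the root namespace

ping -c 1 10.0.0.2                                                    # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target namespace -> initiator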
00:17:46.746 00:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.746 00:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3036551 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0907158be86d9e0f53605b4247730f69d6264f9509fa9700 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.VUn 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0907158be86d9e0f53605b4247730f69d6264f9509fa9700 0 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0907158be86d9e0f53605b4247730f69d6264f9509fa9700 0 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0907158be86d9e0f53605b4247730f69d6264f9509fa9700 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.VUn 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.VUn 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.VUn 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=aeafdff11754950a06589a28c0d0aa64933643c4bdf4c9876e3d4b251b46b82c 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.9Bw 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key aeafdff11754950a06589a28c0d0aa64933643c4bdf4c9876e3d4b251b46b82c 3 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 aeafdff11754950a06589a28c0d0aa64933643c4bdf4c9876e3d4b251b46b82c 3 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=aeafdff11754950a06589a28c0d0aa64933643c4bdf4c9876e3d4b251b46b82c 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.9Bw 00:17:47.005 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.9Bw 00:17:47.006 00:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.9Bw 00:17:47.006 00:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:47.006 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.006 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.006 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.006 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:47.006 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:47.006 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.006 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7e60bb8a4386844d07b096c988f3b433 00:17:47.006 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ry9 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7e60bb8a4386844d07b096c988f3b433 1 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7e60bb8a4386844d07b096c988f3b433 1 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=7e60bb8a4386844d07b096c988f3b433 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ry9 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ry9 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.ry9 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1e1a5b3e1e485188a936c6584f47fddd70b87641075deba3 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.77I 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1e1a5b3e1e485188a936c6584f47fddd70b87641075deba3 2 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1e1a5b3e1e485188a936c6584f47fddd70b87641075deba3 2 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1e1a5b3e1e485188a936c6584f47fddd70b87641075deba3 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.77I 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.77I 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.77I 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=07236069f442a25ebf2717cb5f54957d332f4f3137456cc3 00:17:47.265 
00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.rGU 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 07236069f442a25ebf2717cb5f54957d332f4f3137456cc3 2 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 07236069f442a25ebf2717cb5f54957d332f4f3137456cc3 2 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=07236069f442a25ebf2717cb5f54957d332f4f3137456cc3 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:47.265 00:44:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.rGU 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.rGU 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.rGU 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=107e8c394b51a70e0bb2dd53d3c92966 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dc7 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 107e8c394b51a70e0bb2dd53d3c92966 1 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 107e8c394b51a70e0bb2dd53d3c92966 1 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=107e8c394b51a70e0bb2dd53d3c92966 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:47.265 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.524 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dc7 00:17:47.524 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dc7 00:17:47.524 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.dc7 00:17:47.524 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:47.524 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:47.524 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.524 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.524 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:47.524 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:47.524 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:47.524 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a706bd2976a3b08c88eaefa05cf3b65ffe2e2c46d45275d4c4792f09d3e22e80 00:17:47.524 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XjQ 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a706bd2976a3b08c88eaefa05cf3b65ffe2e2c46d45275d4c4792f09d3e22e80 3 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a706bd2976a3b08c88eaefa05cf3b65ffe2e2c46d45275d4c4792f09d3e22e80 3 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a706bd2976a3b08c88eaefa05cf3b65ffe2e2c46d45275d4c4792f09d3e22e80 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XjQ 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XjQ 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.XjQ 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3036401 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3036401 ']' 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
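Each secret above is produced by gen_dhchap_key: pull half the requested length in random bytes, keep the hex text as the key material, and have format_dhchap_key wrap it as DHHC-1:<digest>:<base64>:. The sketch below reconstructs that wrapping from the traced commands; the exact helper bodies and the CRC byte order are assumptions, while the digest ids (00 = null, 01/02/03 = sha256/384/512) follow the digest map shown in the trace.

# 48-character hex key, as gen_dhchap_key null 48 does above
key=$(xxd -p -c0 -l 24 /dev/urandom)

# Wrap it the way format_dhchap_key appears to: base64 of the ASCII hex plus a
# 4-byte CRC32 of it (byte order assumed little-endian), prefixed with the digest id.
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:00:{}:".format(base64.b64encode(key + crc).decode()))
EOF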
00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.525 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.783 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.783 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:47.783 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3036551 /var/tmp/host.sock 00:17:47.783 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3036551 ']' 00:17:47.783 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:47.783 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.783 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:47.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:47.783 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.783 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.041 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.041 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:48.041 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:48.041 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.041 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.041 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.041 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:48.041 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.VUn 00:17:48.041 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.041 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.041 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.041 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.VUn 00:17:48.041 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.VUn 00:17:48.300 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.9Bw ]] 00:17:48.300 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9Bw 00:17:48.300 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.300 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.300 00:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.300 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9Bw 00:17:48.300 00:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9Bw 00:17:48.558 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:48.558 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ry9 00:17:48.558 00:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.558 00:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.558 00:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.558 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ry9 00:17:48.558 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ry9 00:17:48.816 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.77I ]] 00:17:48.816 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.77I 00:17:48.816 00:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.816 00:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.816 00:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.816 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.77I 00:17:48.816 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.77I 00:17:49.075 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:49.075 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.rGU 00:17:49.075 00:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.075 00:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.075 00:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.075 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.rGU 00:17:49.075 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.rGU 00:17:49.334 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.dc7 ]] 00:17:49.334 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dc7 00:17:49.334 00:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.334 00:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.334 00:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.334 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dc7 00:17:49.334 00:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.dc7 00:17:49.334 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:49.334 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.XjQ 00:17:49.334 00:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.334 00:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.334 00:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.334 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.XjQ 00:17:49.334 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.XjQ 00:17:49.593 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:49.593 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:49.593 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.593 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.593 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.593 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.853 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:49.853 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.853 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.853 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:49.853 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:49.853 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.853 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.853 00:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.853 00:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.853 00:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.853 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.853 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.112 00:17:50.112 00:44:07 nvmf_tcp.nvmf_auth_target -- 
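The key-loading loop traced above registers every generated key file twice: once on the target through rpc_cmd (the target's default RPC socket) and once on the host-side server listening on /var/tmp/host.sock through hostrpc. A minimal standalone sketch of one such pair, assuming an SPDK checkout in ./spdk and the key files this run generated under /tmp, would be:

  # target side: register the host key and its controller (bidirectional) counterpart
  ./spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.VUn
  ./spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9Bw
  # host side: the same key material must be visible to the bdev_nvme initiator,
  # which answers on its own RPC socket (/var/tmp/host.sock in this run)
  ./spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.VUn
  ./spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9Bw

key3 has no ckey3 counterpart in this run (the [[ -n '' ]] check above), so the later key3 iterations authenticate unidirectionally, without --dhchap-ctrlr-key.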
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.112 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.112 00:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.371 00:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.371 00:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.371 00:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.371 00:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.372 00:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.372 00:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.372 { 00:17:50.372 "cntlid": 1, 00:17:50.372 "qid": 0, 00:17:50.372 "state": "enabled", 00:17:50.372 "thread": "nvmf_tgt_poll_group_000", 00:17:50.372 "listen_address": { 00:17:50.372 "trtype": "TCP", 00:17:50.372 "adrfam": "IPv4", 00:17:50.372 "traddr": "10.0.0.2", 00:17:50.372 "trsvcid": "4420" 00:17:50.372 }, 00:17:50.372 "peer_address": { 00:17:50.372 "trtype": "TCP", 00:17:50.372 "adrfam": "IPv4", 00:17:50.372 "traddr": "10.0.0.1", 00:17:50.372 "trsvcid": "47864" 00:17:50.372 }, 00:17:50.372 "auth": { 00:17:50.372 "state": "completed", 00:17:50.372 "digest": "sha256", 00:17:50.372 "dhgroup": "null" 00:17:50.372 } 00:17:50.372 } 00:17:50.372 ]' 00:17:50.372 00:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.631 00:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.631 00:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.631 00:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:50.631 00:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.631 00:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.631 00:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.631 00:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.890 00:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.829 00:44:09 
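The iteration that just completed (digest sha256, dhgroup null, key0) is what target/auth.sh calls connect_authenticate. Reduced to the RPCs visible in the trace, and with the NQNs and 10.0.0.x addresses taken from this run (placeholders otherwise), one pass looks roughly like:

  # host initiator: only allow the digest/dhgroup pair under test to be negotiated
  ./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null
  # target: require DH-HMAC-CHAP from this host, key0 plus ckey0 for bidirectional auth
  ./spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host: attach a controller presenting the same key pair
  ./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify: the controller exists and the target reports the qpair as authenticated
  ./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  ./spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'                                                              # expect completed
  ./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The jq checks correspond to the [[ sha256 == sha256 ]], [[ null == null ]] and [[ completed == completed ]] assertions that follow each nvmf_subsystem_get_qpairs call in the trace.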
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.829 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.398 00:17:52.398 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.398 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.398 00:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.398 00:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.398 00:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.398 00:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.398 00:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.398 00:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.398 00:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.398 { 00:17:52.398 "cntlid": 3, 00:17:52.398 "qid": 0, 00:17:52.398 
"state": "enabled", 00:17:52.398 "thread": "nvmf_tgt_poll_group_000", 00:17:52.398 "listen_address": { 00:17:52.398 "trtype": "TCP", 00:17:52.398 "adrfam": "IPv4", 00:17:52.398 "traddr": "10.0.0.2", 00:17:52.398 "trsvcid": "4420" 00:17:52.398 }, 00:17:52.398 "peer_address": { 00:17:52.398 "trtype": "TCP", 00:17:52.398 "adrfam": "IPv4", 00:17:52.398 "traddr": "10.0.0.1", 00:17:52.398 "trsvcid": "47878" 00:17:52.398 }, 00:17:52.398 "auth": { 00:17:52.398 "state": "completed", 00:17:52.398 "digest": "sha256", 00:17:52.398 "dhgroup": "null" 00:17:52.398 } 00:17:52.398 } 00:17:52.398 ]' 00:17:52.657 00:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.657 00:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.657 00:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.657 00:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:52.657 00:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.657 00:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.657 00:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.657 00:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.917 00:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:17:53.854 00:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.854 00:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:53.854 00:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.854 00:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.854 00:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.854 00:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.854 00:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:53.854 00:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:54.113 00:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:54.113 00:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.113 00:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.113 00:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:54.113 00:44:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:54.113 00:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.113 00:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.113 00:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.113 00:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.113 00:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.113 00:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.113 00:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.680 00:17:54.680 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.680 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.680 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.938 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.938 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.938 00:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.938 00:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.938 00:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.938 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.938 { 00:17:54.938 "cntlid": 5, 00:17:54.938 "qid": 0, 00:17:54.938 "state": "enabled", 00:17:54.938 "thread": "nvmf_tgt_poll_group_000", 00:17:54.938 "listen_address": { 00:17:54.938 "trtype": "TCP", 00:17:54.938 "adrfam": "IPv4", 00:17:54.938 "traddr": "10.0.0.2", 00:17:54.938 "trsvcid": "4420" 00:17:54.938 }, 00:17:54.938 "peer_address": { 00:17:54.938 "trtype": "TCP", 00:17:54.938 "adrfam": "IPv4", 00:17:54.938 "traddr": "10.0.0.1", 00:17:54.938 "trsvcid": "53580" 00:17:54.938 }, 00:17:54.938 "auth": { 00:17:54.938 "state": "completed", 00:17:54.938 "digest": "sha256", 00:17:54.938 "dhgroup": "null" 00:17:54.938 } 00:17:54.938 } 00:17:54.938 ]' 00:17:54.938 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.196 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.196 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.196 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:55.196 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:55.196 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.196 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.196 00:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.454 00:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:17:56.385 00:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.385 00:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:56.385 00:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.385 00:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.385 00:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.385 00:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.385 00:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.385 00:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.385 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:56.385 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.385 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.385 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:56.385 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.385 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.385 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:17:56.385 00:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.385 00:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.385 00:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.385 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.385 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.644 00:17:56.644 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.644 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.644 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.902 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.902 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.902 00:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.902 00:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.902 00:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.902 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.902 { 00:17:56.902 "cntlid": 7, 00:17:56.902 "qid": 0, 00:17:56.902 "state": "enabled", 00:17:56.902 "thread": "nvmf_tgt_poll_group_000", 00:17:56.902 "listen_address": { 00:17:56.902 "trtype": "TCP", 00:17:56.902 "adrfam": "IPv4", 00:17:56.902 "traddr": "10.0.0.2", 00:17:56.902 "trsvcid": "4420" 00:17:56.902 }, 00:17:56.902 "peer_address": { 00:17:56.902 "trtype": "TCP", 00:17:56.902 "adrfam": "IPv4", 00:17:56.902 "traddr": "10.0.0.1", 00:17:56.902 "trsvcid": "53614" 00:17:56.902 }, 00:17:56.902 "auth": { 00:17:56.902 "state": "completed", 00:17:56.902 "digest": "sha256", 00:17:56.902 "dhgroup": "null" 00:17:56.902 } 00:17:56.902 } 00:17:56.902 ]' 00:17:56.902 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.902 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.902 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.161 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:57.161 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.162 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.162 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.162 00:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.421 00:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:17:58.359 00:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.359 00:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:17:58.359 00:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.359 00:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.359 00:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.359 00:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.359 00:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.359 00:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:58.359 00:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:58.359 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:58.359 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.359 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.359 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:58.359 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:58.359 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.359 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.359 00:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.359 00:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.359 00:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.359 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.359 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.618 00:17:58.618 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.618 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.618 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.878 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.878 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.878 00:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
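From this point the trace repeats the same pattern with the dhgroup advanced from null to ffdhe2048 (and later ffdhe3072); only the value passed to bdev_nvme_set_options changes between groups. Matching the for-loops shown at target/auth.sh@91-93, the outer structure is approximately the sketch below; the lists shown are only what is observed in this excerpt, not necessarily everything the script iterates over:

  for digest in sha256; do                        # later digests are outside this excerpt
    for dhgroup in null ffdhe2048 ffdhe3072; do
      for keyid in 0 1 2 3; do
        # constrain what the host may negotiate before each attach
        ./spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # one pass as sketched earlier
      done
    done
  done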
-- # xtrace_disable 00:17:58.878 00:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.878 00:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.878 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.878 { 00:17:58.878 "cntlid": 9, 00:17:58.878 "qid": 0, 00:17:58.878 "state": "enabled", 00:17:58.878 "thread": "nvmf_tgt_poll_group_000", 00:17:58.878 "listen_address": { 00:17:58.878 "trtype": "TCP", 00:17:58.878 "adrfam": "IPv4", 00:17:58.878 "traddr": "10.0.0.2", 00:17:58.878 "trsvcid": "4420" 00:17:58.878 }, 00:17:58.878 "peer_address": { 00:17:58.878 "trtype": "TCP", 00:17:58.878 "adrfam": "IPv4", 00:17:58.878 "traddr": "10.0.0.1", 00:17:58.878 "trsvcid": "53656" 00:17:58.878 }, 00:17:58.878 "auth": { 00:17:58.878 "state": "completed", 00:17:58.878 "digest": "sha256", 00:17:58.878 "dhgroup": "ffdhe2048" 00:17:58.878 } 00:17:58.878 } 00:17:58.878 ]' 00:17:58.878 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.878 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.878 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.137 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.137 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.137 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.137 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.137 00:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.395 00:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:18:00.333 00:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.333 00:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:00.333 00:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.333 00:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.333 00:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.333 00:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.333 00:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.333 00:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:00.333 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:00.333 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.333 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.333 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:00.333 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:00.333 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.333 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.333 00:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.333 00:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.333 00:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.333 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.333 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.592 00:18:00.852 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.852 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.852 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.111 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.111 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.111 00:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.111 00:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.111 00:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.111 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.111 { 00:18:01.111 "cntlid": 11, 00:18:01.111 "qid": 0, 00:18:01.111 "state": "enabled", 00:18:01.111 "thread": "nvmf_tgt_poll_group_000", 00:18:01.111 "listen_address": { 00:18:01.111 "trtype": "TCP", 00:18:01.111 "adrfam": "IPv4", 00:18:01.111 "traddr": "10.0.0.2", 00:18:01.111 "trsvcid": "4420" 00:18:01.111 }, 00:18:01.111 "peer_address": { 00:18:01.111 "trtype": "TCP", 00:18:01.111 "adrfam": "IPv4", 00:18:01.111 "traddr": "10.0.0.1", 00:18:01.111 "trsvcid": "53688" 00:18:01.111 }, 00:18:01.111 "auth": { 00:18:01.111 "state": "completed", 00:18:01.111 "digest": "sha256", 00:18:01.111 "dhgroup": "ffdhe2048" 00:18:01.111 } 00:18:01.111 } 00:18:01.111 ]' 00:18:01.111 
00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.111 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.111 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.111 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.111 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.111 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.111 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.111 00:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.370 00:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:18:02.309 00:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.309 00:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:02.309 00:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.309 00:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.309 00:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.309 00:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.309 00:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:02.309 00:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:02.568 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:02.568 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.568 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.568 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:02.568 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:02.568 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.568 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.568 00:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.568 00:44:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.568 00:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.568 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.568 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.827 00:18:02.827 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.827 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.827 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.086 { 00:18:03.086 "cntlid": 13, 00:18:03.086 "qid": 0, 00:18:03.086 "state": "enabled", 00:18:03.086 "thread": "nvmf_tgt_poll_group_000", 00:18:03.086 "listen_address": { 00:18:03.086 "trtype": "TCP", 00:18:03.086 "adrfam": "IPv4", 00:18:03.086 "traddr": "10.0.0.2", 00:18:03.086 "trsvcid": "4420" 00:18:03.086 }, 00:18:03.086 "peer_address": { 00:18:03.086 "trtype": "TCP", 00:18:03.086 "adrfam": "IPv4", 00:18:03.086 "traddr": "10.0.0.1", 00:18:03.086 "trsvcid": "53712" 00:18:03.086 }, 00:18:03.086 "auth": { 00:18:03.086 "state": "completed", 00:18:03.086 "digest": "sha256", 00:18:03.086 "dhgroup": "ffdhe2048" 00:18:03.086 } 00:18:03.086 } 00:18:03.086 ]' 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.086 00:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.655 00:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:18:04.223 00:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.224 00:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:04.224 00:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.224 00:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.224 00:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.224 00:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.224 00:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:04.224 00:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:04.792 00:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:04.792 00:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.792 00:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.792 00:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:04.792 00:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:04.792 00:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.792 00:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:04.792 00:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.792 00:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.792 00:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.792 00:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.792 00:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.359 00:18:05.359 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.359 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.359 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.617 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.617 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.617 00:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.617 00:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.617 00:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.617 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.617 { 00:18:05.617 "cntlid": 15, 00:18:05.617 "qid": 0, 00:18:05.617 "state": "enabled", 00:18:05.617 "thread": "nvmf_tgt_poll_group_000", 00:18:05.617 "listen_address": { 00:18:05.617 "trtype": "TCP", 00:18:05.617 "adrfam": "IPv4", 00:18:05.617 "traddr": "10.0.0.2", 00:18:05.617 "trsvcid": "4420" 00:18:05.617 }, 00:18:05.617 "peer_address": { 00:18:05.617 "trtype": "TCP", 00:18:05.617 "adrfam": "IPv4", 00:18:05.617 "traddr": "10.0.0.1", 00:18:05.617 "trsvcid": "35314" 00:18:05.617 }, 00:18:05.617 "auth": { 00:18:05.617 "state": "completed", 00:18:05.617 "digest": "sha256", 00:18:05.617 "dhgroup": "ffdhe2048" 00:18:05.617 } 00:18:05.617 } 00:18:05.617 ]' 00:18:05.617 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.617 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.617 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.617 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.617 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.875 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.875 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.875 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.134 00:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:18:06.702 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.702 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:06.702 00:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.702 00:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.702 00:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.702 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.702 00:44:24 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.702 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.702 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.961 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:07.221 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.221 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.221 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:07.221 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.221 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.221 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.221 00:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.221 00:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.221 00:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.221 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.222 00:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.481 00:18:07.481 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.481 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.481 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.740 { 00:18:07.740 "cntlid": 17, 00:18:07.740 "qid": 0, 00:18:07.740 "state": "enabled", 00:18:07.740 "thread": "nvmf_tgt_poll_group_000", 00:18:07.740 "listen_address": { 00:18:07.740 "trtype": "TCP", 00:18:07.740 "adrfam": "IPv4", 00:18:07.740 "traddr": 
"10.0.0.2", 00:18:07.740 "trsvcid": "4420" 00:18:07.740 }, 00:18:07.740 "peer_address": { 00:18:07.740 "trtype": "TCP", 00:18:07.740 "adrfam": "IPv4", 00:18:07.740 "traddr": "10.0.0.1", 00:18:07.740 "trsvcid": "35348" 00:18:07.740 }, 00:18:07.740 "auth": { 00:18:07.740 "state": "completed", 00:18:07.740 "digest": "sha256", 00:18:07.740 "dhgroup": "ffdhe3072" 00:18:07.740 } 00:18:07.740 } 00:18:07.740 ]' 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.740 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.999 00:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:18:08.938 00:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.938 00:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:08.938 00:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.938 00:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.938 00:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.938 00:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.938 00:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.938 00:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:09.197 00:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:09.197 00:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.197 00:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.197 00:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:09.197 00:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.197 00:44:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.197 00:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.197 00:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.197 00:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.197 00:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.197 00:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.197 00:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.806 00:18:09.806 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.806 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.806 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.806 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.806 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.806 00:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.806 00:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.080 00:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.080 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.080 { 00:18:10.080 "cntlid": 19, 00:18:10.080 "qid": 0, 00:18:10.080 "state": "enabled", 00:18:10.080 "thread": "nvmf_tgt_poll_group_000", 00:18:10.080 "listen_address": { 00:18:10.080 "trtype": "TCP", 00:18:10.080 "adrfam": "IPv4", 00:18:10.080 "traddr": "10.0.0.2", 00:18:10.080 "trsvcid": "4420" 00:18:10.080 }, 00:18:10.080 "peer_address": { 00:18:10.080 "trtype": "TCP", 00:18:10.080 "adrfam": "IPv4", 00:18:10.080 "traddr": "10.0.0.1", 00:18:10.080 "trsvcid": "35374" 00:18:10.080 }, 00:18:10.080 "auth": { 00:18:10.080 "state": "completed", 00:18:10.080 "digest": "sha256", 00:18:10.080 "dhgroup": "ffdhe3072" 00:18:10.080 } 00:18:10.080 } 00:18:10.080 ]' 00:18:10.080 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.080 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.080 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.080 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.080 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.080 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.080 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.080 00:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.340 00:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:18:11.277 00:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.277 00:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:11.277 00:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.277 00:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.277 00:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.277 00:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.277 00:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:11.277 00:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:11.277 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:11.277 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.277 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.277 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:11.277 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.277 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.277 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.277 00:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.277 00:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.277 00:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.277 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.277 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.846 00:18:11.846 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.846 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.846 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.106 { 00:18:12.106 "cntlid": 21, 00:18:12.106 "qid": 0, 00:18:12.106 "state": "enabled", 00:18:12.106 "thread": "nvmf_tgt_poll_group_000", 00:18:12.106 "listen_address": { 00:18:12.106 "trtype": "TCP", 00:18:12.106 "adrfam": "IPv4", 00:18:12.106 "traddr": "10.0.0.2", 00:18:12.106 "trsvcid": "4420" 00:18:12.106 }, 00:18:12.106 "peer_address": { 00:18:12.106 "trtype": "TCP", 00:18:12.106 "adrfam": "IPv4", 00:18:12.106 "traddr": "10.0.0.1", 00:18:12.106 "trsvcid": "35412" 00:18:12.106 }, 00:18:12.106 "auth": { 00:18:12.106 "state": "completed", 00:18:12.106 "digest": "sha256", 00:18:12.106 "dhgroup": "ffdhe3072" 00:18:12.106 } 00:18:12.106 } 00:18:12.106 ]' 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.106 00:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.365 00:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:18:13.303 00:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:13.303 00:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:13.303 00:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.303 00:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.303 00:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.303 00:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.303 00:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:13.303 00:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:13.562 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:13.562 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.562 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.562 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:13.562 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.562 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.562 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:13.562 00:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.562 00:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.562 00:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.562 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.562 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.822 00:18:13.822 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.822 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.822 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.081 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.081 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.082 00:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.082 00:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:14.082 00:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.082 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.082 { 00:18:14.082 "cntlid": 23, 00:18:14.082 "qid": 0, 00:18:14.082 "state": "enabled", 00:18:14.082 "thread": "nvmf_tgt_poll_group_000", 00:18:14.082 "listen_address": { 00:18:14.082 "trtype": "TCP", 00:18:14.082 "adrfam": "IPv4", 00:18:14.082 "traddr": "10.0.0.2", 00:18:14.082 "trsvcid": "4420" 00:18:14.082 }, 00:18:14.082 "peer_address": { 00:18:14.082 "trtype": "TCP", 00:18:14.082 "adrfam": "IPv4", 00:18:14.082 "traddr": "10.0.0.1", 00:18:14.082 "trsvcid": "35446" 00:18:14.082 }, 00:18:14.082 "auth": { 00:18:14.082 "state": "completed", 00:18:14.082 "digest": "sha256", 00:18:14.082 "dhgroup": "ffdhe3072" 00:18:14.082 } 00:18:14.082 } 00:18:14.082 ]' 00:18:14.082 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.082 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.082 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.082 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:14.082 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.341 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.341 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.341 00:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.599 00:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:18:15.166 00:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.166 00:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:15.166 00:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.166 00:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.166 00:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.425 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.425 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.425 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.425 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.684 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:15.684 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.684 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:15.684 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:15.684 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.684 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.684 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.684 00:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.684 00:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.684 00:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.684 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.684 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.252 00:18:16.252 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.252 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.252 00:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.510 00:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.510 00:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.510 00:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.510 00:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.510 00:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.510 00:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.510 { 00:18:16.510 "cntlid": 25, 00:18:16.510 "qid": 0, 00:18:16.510 "state": "enabled", 00:18:16.510 "thread": "nvmf_tgt_poll_group_000", 00:18:16.510 "listen_address": { 00:18:16.510 "trtype": "TCP", 00:18:16.510 "adrfam": "IPv4", 00:18:16.510 "traddr": "10.0.0.2", 00:18:16.510 "trsvcid": "4420" 00:18:16.510 }, 00:18:16.510 "peer_address": { 00:18:16.510 "trtype": "TCP", 00:18:16.510 "adrfam": "IPv4", 00:18:16.510 "traddr": "10.0.0.1", 00:18:16.510 "trsvcid": "58428" 00:18:16.510 }, 00:18:16.510 "auth": { 00:18:16.510 "state": "completed", 00:18:16.510 "digest": "sha256", 00:18:16.510 "dhgroup": "ffdhe4096" 00:18:16.510 } 00:18:16.510 } 00:18:16.510 ]' 00:18:16.510 00:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.510 00:44:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.510 00:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.510 00:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.510 00:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.510 00:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.510 00:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.510 00:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.769 00:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:18:17.703 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.703 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:17.703 00:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.703 00:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.703 00:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.703 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.703 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:17.703 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:17.962 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:17.962 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.962 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.962 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:17.962 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:17.962 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.962 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.962 00:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.962 00:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.962 00:44:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.962 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.962 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.219 00:18:18.219 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.219 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.219 00:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.476 00:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.476 00:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.476 00:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.476 00:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.476 00:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.476 00:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.476 { 00:18:18.476 "cntlid": 27, 00:18:18.476 "qid": 0, 00:18:18.476 "state": "enabled", 00:18:18.476 "thread": "nvmf_tgt_poll_group_000", 00:18:18.476 "listen_address": { 00:18:18.476 "trtype": "TCP", 00:18:18.476 "adrfam": "IPv4", 00:18:18.476 "traddr": "10.0.0.2", 00:18:18.476 "trsvcid": "4420" 00:18:18.476 }, 00:18:18.476 "peer_address": { 00:18:18.476 "trtype": "TCP", 00:18:18.476 "adrfam": "IPv4", 00:18:18.476 "traddr": "10.0.0.1", 00:18:18.476 "trsvcid": "58444" 00:18:18.476 }, 00:18:18.476 "auth": { 00:18:18.476 "state": "completed", 00:18:18.476 "digest": "sha256", 00:18:18.476 "dhgroup": "ffdhe4096" 00:18:18.476 } 00:18:18.476 } 00:18:18.476 ]' 00:18:18.476 00:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.476 00:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.476 00:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.733 00:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:18.733 00:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.733 00:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.733 00:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.733 00:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.991 00:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.924 00:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.490 00:18:20.490 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.490 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.490 00:44:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.749 { 00:18:20.749 "cntlid": 29, 00:18:20.749 "qid": 0, 00:18:20.749 "state": "enabled", 00:18:20.749 "thread": "nvmf_tgt_poll_group_000", 00:18:20.749 "listen_address": { 00:18:20.749 "trtype": "TCP", 00:18:20.749 "adrfam": "IPv4", 00:18:20.749 "traddr": "10.0.0.2", 00:18:20.749 "trsvcid": "4420" 00:18:20.749 }, 00:18:20.749 "peer_address": { 00:18:20.749 "trtype": "TCP", 00:18:20.749 "adrfam": "IPv4", 00:18:20.749 "traddr": "10.0.0.1", 00:18:20.749 "trsvcid": "58466" 00:18:20.749 }, 00:18:20.749 "auth": { 00:18:20.749 "state": "completed", 00:18:20.749 "digest": "sha256", 00:18:20.749 "dhgroup": "ffdhe4096" 00:18:20.749 } 00:18:20.749 } 00:18:20.749 ]' 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.749 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.007 00:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:18:21.943 00:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.943 00:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:21.943 00:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.943 00:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.943 00:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.943 00:44:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.943 00:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:21.943 00:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:22.202 00:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:22.202 00:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.202 00:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:22.202 00:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:22.202 00:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:22.202 00:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.202 00:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:22.202 00:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.202 00:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.202 00:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.202 00:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.202 00:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.461 00:18:22.461 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.461 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.461 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.720 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.720 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.720 00:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.720 00:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.720 00:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.720 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.720 { 00:18:22.720 "cntlid": 31, 00:18:22.720 "qid": 0, 00:18:22.720 "state": "enabled", 00:18:22.720 "thread": "nvmf_tgt_poll_group_000", 00:18:22.720 "listen_address": { 00:18:22.720 "trtype": "TCP", 00:18:22.720 "adrfam": "IPv4", 00:18:22.720 "traddr": "10.0.0.2", 00:18:22.720 "trsvcid": "4420" 00:18:22.720 }, 
00:18:22.720 "peer_address": { 00:18:22.720 "trtype": "TCP", 00:18:22.720 "adrfam": "IPv4", 00:18:22.720 "traddr": "10.0.0.1", 00:18:22.720 "trsvcid": "58500" 00:18:22.720 }, 00:18:22.720 "auth": { 00:18:22.720 "state": "completed", 00:18:22.720 "digest": "sha256", 00:18:22.720 "dhgroup": "ffdhe4096" 00:18:22.720 } 00:18:22.720 } 00:18:22.720 ]' 00:18:22.720 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.720 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.720 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.720 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:22.720 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.979 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.979 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.979 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.237 00:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:18:23.804 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.804 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:23.804 00:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.804 00:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.063 00:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.322 00:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.322 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.322 00:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.580 00:18:24.580 00:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.580 00:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.580 00:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.839 00:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.839 00:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.839 00:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.839 00:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.839 00:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.839 00:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.839 { 00:18:24.839 "cntlid": 33, 00:18:24.839 "qid": 0, 00:18:24.839 "state": "enabled", 00:18:24.839 "thread": "nvmf_tgt_poll_group_000", 00:18:24.839 "listen_address": { 00:18:24.839 "trtype": "TCP", 00:18:24.839 "adrfam": "IPv4", 00:18:24.839 "traddr": "10.0.0.2", 00:18:24.839 "trsvcid": "4420" 00:18:24.839 }, 00:18:24.839 "peer_address": { 00:18:24.839 "trtype": "TCP", 00:18:24.839 "adrfam": "IPv4", 00:18:24.839 "traddr": "10.0.0.1", 00:18:24.839 "trsvcid": "49884" 00:18:24.839 }, 00:18:24.839 "auth": { 00:18:24.839 "state": "completed", 00:18:24.839 "digest": "sha256", 00:18:24.839 "dhgroup": "ffdhe6144" 00:18:24.839 } 00:18:24.839 } 00:18:24.839 ]' 00:18:24.839 00:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.098 00:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.098 00:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.098 00:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.098 00:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.098 00:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.098 00:44:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.098 00:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.356 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:18:26.294 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.294 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:26.294 00:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.294 00:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.294 00:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.294 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.294 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:26.294 00:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:26.553 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:26.553 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.553 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.553 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:26.553 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.553 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.553 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.553 00:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.553 00:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.553 00:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.553 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.553 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.812 00:18:26.812 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.812 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.812 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.071 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.071 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.071 00:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.071 00:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.071 00:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.071 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.071 { 00:18:27.071 "cntlid": 35, 00:18:27.071 "qid": 0, 00:18:27.071 "state": "enabled", 00:18:27.071 "thread": "nvmf_tgt_poll_group_000", 00:18:27.071 "listen_address": { 00:18:27.071 "trtype": "TCP", 00:18:27.071 "adrfam": "IPv4", 00:18:27.071 "traddr": "10.0.0.2", 00:18:27.071 "trsvcid": "4420" 00:18:27.071 }, 00:18:27.071 "peer_address": { 00:18:27.071 "trtype": "TCP", 00:18:27.071 "adrfam": "IPv4", 00:18:27.071 "traddr": "10.0.0.1", 00:18:27.071 "trsvcid": "49904" 00:18:27.071 }, 00:18:27.071 "auth": { 00:18:27.071 "state": "completed", 00:18:27.071 "digest": "sha256", 00:18:27.071 "dhgroup": "ffdhe6144" 00:18:27.071 } 00:18:27.071 } 00:18:27.071 ]' 00:18:27.071 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.071 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.071 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.330 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:27.330 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.330 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.330 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.330 00:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.898 00:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:18:28.466 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.466 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:28.466 00:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.466 00:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.466 00:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.466 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.466 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:28.466 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:28.726 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:28.726 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.726 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.726 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:28.726 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:28.726 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.726 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.726 00:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.726 00:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.726 00:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.726 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.726 00:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.294 00:18:29.294 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.294 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.294 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.553 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.553 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.553 00:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.553 00:44:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.553 00:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.553 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.553 { 00:18:29.553 "cntlid": 37, 00:18:29.553 "qid": 0, 00:18:29.553 "state": "enabled", 00:18:29.553 "thread": "nvmf_tgt_poll_group_000", 00:18:29.553 "listen_address": { 00:18:29.553 "trtype": "TCP", 00:18:29.553 "adrfam": "IPv4", 00:18:29.553 "traddr": "10.0.0.2", 00:18:29.553 "trsvcid": "4420" 00:18:29.553 }, 00:18:29.553 "peer_address": { 00:18:29.553 "trtype": "TCP", 00:18:29.553 "adrfam": "IPv4", 00:18:29.553 "traddr": "10.0.0.1", 00:18:29.553 "trsvcid": "49942" 00:18:29.553 }, 00:18:29.553 "auth": { 00:18:29.553 "state": "completed", 00:18:29.553 "digest": "sha256", 00:18:29.553 "dhgroup": "ffdhe6144" 00:18:29.553 } 00:18:29.553 } 00:18:29.553 ]' 00:18:29.553 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.553 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.553 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.812 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:29.812 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.812 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.812 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.812 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.071 00:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:18:31.007 00:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.007 00:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:31.007 00:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.007 00:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.007 00:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.007 00:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.007 00:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:31.007 00:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:31.266 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:18:31.266 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.266 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:31.266 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:31.266 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:31.266 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.266 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:31.266 00:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.266 00:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.266 00:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.266 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.266 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.834 00:18:31.834 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.834 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.834 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.093 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.093 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.093 00:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.093 00:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.093 00:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.093 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.093 { 00:18:32.093 "cntlid": 39, 00:18:32.093 "qid": 0, 00:18:32.093 "state": "enabled", 00:18:32.093 "thread": "nvmf_tgt_poll_group_000", 00:18:32.093 "listen_address": { 00:18:32.093 "trtype": "TCP", 00:18:32.093 "adrfam": "IPv4", 00:18:32.093 "traddr": "10.0.0.2", 00:18:32.093 "trsvcid": "4420" 00:18:32.093 }, 00:18:32.093 "peer_address": { 00:18:32.093 "trtype": "TCP", 00:18:32.093 "adrfam": "IPv4", 00:18:32.093 "traddr": "10.0.0.1", 00:18:32.093 "trsvcid": "49962" 00:18:32.093 }, 00:18:32.093 "auth": { 00:18:32.093 "state": "completed", 00:18:32.093 "digest": "sha256", 00:18:32.093 "dhgroup": "ffdhe6144" 00:18:32.093 } 00:18:32.093 } 00:18:32.093 ]' 00:18:32.093 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.093 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.093 00:44:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.093 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:32.093 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.093 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.093 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.093 00:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.352 00:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:18:33.287 00:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.287 00:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:33.287 00:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.287 00:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.287 00:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.287 00:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.287 00:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.287 00:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:33.287 00:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:33.547 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:33.547 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.547 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:33.547 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:33.547 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.547 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.547 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.547 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.547 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.547 00:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.547 00:44:51 
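The trace above finishes one connect_authenticate iteration (sha256 / ffdhe6144 / key3) and starts the next (sha256 / ffdhe8192 / key0). Stripped of timestamps and xtrace noise, each iteration runs roughly the sequence below; rpc_cmd talks to the target app and rpc.py -s /var/tmp/host.sock (abbreviating the full scripts/rpc.py path in the log) to the host-side app, and SUBNQN / HOSTNQN / HOSTID / N stand in for the literal values shown in the log. This is a condensed sketch of the flow visible in the trace, not the verbatim auth.sh code:

  # restrict the host to the digest/dhgroup under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # register the host on the target subsystem with key N (controller key only when one exists for N)
  rpc_cmd nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key$N --dhchap-ctrlr-key ckey$N
  # attach a host-side controller, which performs DH-HMAC-CHAP during connect
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN --dhchap-key key$N --dhchap-ctrlr-key ckey$N
  # confirm the controller exists and the qpair negotiated the expected parameters
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0
  rpc_cmd nvmf_subsystem_get_qpairs $SUBNQN                                   # auth fields checked with jq in the log
  # tear down, then repeat the authentication through the kernel initiator
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID \
      --dhchap-secret $KEY_N --dhchap-ctrl-secret $CKEY_N                     # ctrl secret omitted for key3
  nvme disconnect -n $SUBNQN
  rpc_cmd nvmf_subsystem_remove_host $SUBNQN $HOSTNQN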
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.547 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.116 00:18:34.116 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.116 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.116 00:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.375 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.375 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.375 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.375 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.375 00:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.375 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.375 { 00:18:34.375 "cntlid": 41, 00:18:34.376 "qid": 0, 00:18:34.376 "state": "enabled", 00:18:34.376 "thread": "nvmf_tgt_poll_group_000", 00:18:34.376 "listen_address": { 00:18:34.376 "trtype": "TCP", 00:18:34.376 "adrfam": "IPv4", 00:18:34.376 "traddr": "10.0.0.2", 00:18:34.376 "trsvcid": "4420" 00:18:34.376 }, 00:18:34.376 "peer_address": { 00:18:34.376 "trtype": "TCP", 00:18:34.376 "adrfam": "IPv4", 00:18:34.376 "traddr": "10.0.0.1", 00:18:34.376 "trsvcid": "49986" 00:18:34.376 }, 00:18:34.376 "auth": { 00:18:34.376 "state": "completed", 00:18:34.376 "digest": "sha256", 00:18:34.376 "dhgroup": "ffdhe8192" 00:18:34.376 } 00:18:34.376 } 00:18:34.376 ]' 00:18:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.635 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.894 00:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:18:35.831 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.831 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:35.831 00:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.831 00:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.831 00:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.831 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.831 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:35.831 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:36.398 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:36.398 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.398 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:36.398 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:36.398 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:36.398 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.398 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.398 00:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.398 00:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.398 00:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.398 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.398 00:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.332 00:18:37.332 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.332 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.332 00:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.590 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.591 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.591 00:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.591 00:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.591 00:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.591 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.591 { 00:18:37.591 "cntlid": 43, 00:18:37.591 "qid": 0, 00:18:37.591 "state": "enabled", 00:18:37.591 "thread": "nvmf_tgt_poll_group_000", 00:18:37.591 "listen_address": { 00:18:37.591 "trtype": "TCP", 00:18:37.591 "adrfam": "IPv4", 00:18:37.591 "traddr": "10.0.0.2", 00:18:37.591 "trsvcid": "4420" 00:18:37.591 }, 00:18:37.591 "peer_address": { 00:18:37.591 "trtype": "TCP", 00:18:37.591 "adrfam": "IPv4", 00:18:37.591 "traddr": "10.0.0.1", 00:18:37.591 "trsvcid": "50634" 00:18:37.591 }, 00:18:37.591 "auth": { 00:18:37.591 "state": "completed", 00:18:37.591 "digest": "sha256", 00:18:37.591 "dhgroup": "ffdhe8192" 00:18:37.591 } 00:18:37.591 } 00:18:37.591 ]' 00:18:37.591 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.591 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.591 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.591 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:37.591 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.591 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.591 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.591 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.848 00:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:18:38.783 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.783 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:38.783 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.783 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.783 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.783 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:38.783 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:38.783 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:39.041 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:39.041 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.041 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.041 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:39.041 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:39.041 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.041 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.041 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.041 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.041 00:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.041 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.041 00:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.631 00:18:39.631 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.631 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.631 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.923 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.923 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.923 00:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.923 00:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.923 00:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.923 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.923 { 00:18:39.923 "cntlid": 45, 00:18:39.923 "qid": 0, 00:18:39.923 "state": "enabled", 00:18:39.923 "thread": "nvmf_tgt_poll_group_000", 00:18:39.923 "listen_address": { 00:18:39.923 "trtype": "TCP", 00:18:39.923 "adrfam": "IPv4", 00:18:39.923 "traddr": "10.0.0.2", 00:18:39.923 "trsvcid": "4420" 
00:18:39.923 }, 00:18:39.923 "peer_address": { 00:18:39.923 "trtype": "TCP", 00:18:39.923 "adrfam": "IPv4", 00:18:39.923 "traddr": "10.0.0.1", 00:18:39.923 "trsvcid": "50648" 00:18:39.923 }, 00:18:39.923 "auth": { 00:18:39.923 "state": "completed", 00:18:39.923 "digest": "sha256", 00:18:39.923 "dhgroup": "ffdhe8192" 00:18:39.923 } 00:18:39.923 } 00:18:39.923 ]' 00:18:39.923 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.923 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.923 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.923 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:39.923 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.182 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.182 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.182 00:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.442 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:18:41.010 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.269 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:41.269 00:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.269 00:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.269 00:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.269 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.269 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:41.269 00:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:41.528 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:41.528 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.528 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:41.528 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:41.528 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:41.528 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.528 00:44:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:41.528 00:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.528 00:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.787 00:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.787 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.787 00:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.356 00:18:42.356 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.356 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.356 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.615 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.615 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.615 00:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.615 00:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.615 00:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.615 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.615 { 00:18:42.615 "cntlid": 47, 00:18:42.615 "qid": 0, 00:18:42.615 "state": "enabled", 00:18:42.615 "thread": "nvmf_tgt_poll_group_000", 00:18:42.615 "listen_address": { 00:18:42.615 "trtype": "TCP", 00:18:42.615 "adrfam": "IPv4", 00:18:42.615 "traddr": "10.0.0.2", 00:18:42.615 "trsvcid": "4420" 00:18:42.615 }, 00:18:42.615 "peer_address": { 00:18:42.615 "trtype": "TCP", 00:18:42.615 "adrfam": "IPv4", 00:18:42.615 "traddr": "10.0.0.1", 00:18:42.615 "trsvcid": "50672" 00:18:42.615 }, 00:18:42.615 "auth": { 00:18:42.615 "state": "completed", 00:18:42.615 "digest": "sha256", 00:18:42.615 "dhgroup": "ffdhe8192" 00:18:42.615 } 00:18:42.615 } 00:18:42.615 ]' 00:18:42.615 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.615 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.615 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.615 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.615 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.615 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.615 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.615 
00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.874 00:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.251 00:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.817 00:18:44.817 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.817 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.817 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.076 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.076 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.076 00:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.076 00:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.076 00:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.076 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.076 { 00:18:45.076 "cntlid": 49, 00:18:45.076 "qid": 0, 00:18:45.076 "state": "enabled", 00:18:45.076 "thread": "nvmf_tgt_poll_group_000", 00:18:45.076 "listen_address": { 00:18:45.076 "trtype": "TCP", 00:18:45.076 "adrfam": "IPv4", 00:18:45.076 "traddr": "10.0.0.2", 00:18:45.076 "trsvcid": "4420" 00:18:45.076 }, 00:18:45.076 "peer_address": { 00:18:45.076 "trtype": "TCP", 00:18:45.076 "adrfam": "IPv4", 00:18:45.076 "traddr": "10.0.0.1", 00:18:45.076 "trsvcid": "55680" 00:18:45.076 }, 00:18:45.076 "auth": { 00:18:45.076 "state": "completed", 00:18:45.076 "digest": "sha384", 00:18:45.076 "dhgroup": "null" 00:18:45.076 } 00:18:45.076 } 00:18:45.076 ]' 00:18:45.076 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.076 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.076 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.076 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:45.076 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.335 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.335 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.335 00:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.593 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:18:46.160 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.160 00:45:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:46.160 00:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.160 00:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.160 00:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.160 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.160 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:46.160 00:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:46.419 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:46.419 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.419 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.419 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:46.419 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:46.419 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.419 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.419 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.419 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.419 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.419 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.419 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.988 00:18:46.988 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.988 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.988 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.988 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.988 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.988 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.988 00:45:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.246 00:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.246 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.246 { 00:18:47.246 "cntlid": 51, 00:18:47.246 "qid": 0, 00:18:47.246 "state": "enabled", 00:18:47.246 "thread": "nvmf_tgt_poll_group_000", 00:18:47.246 "listen_address": { 00:18:47.246 "trtype": "TCP", 00:18:47.246 "adrfam": "IPv4", 00:18:47.246 "traddr": "10.0.0.2", 00:18:47.246 "trsvcid": "4420" 00:18:47.246 }, 00:18:47.246 "peer_address": { 00:18:47.246 "trtype": "TCP", 00:18:47.246 "adrfam": "IPv4", 00:18:47.246 "traddr": "10.0.0.1", 00:18:47.246 "trsvcid": "55708" 00:18:47.246 }, 00:18:47.246 "auth": { 00:18:47.246 "state": "completed", 00:18:47.246 "digest": "sha384", 00:18:47.246 "dhgroup": "null" 00:18:47.246 } 00:18:47.246 } 00:18:47.246 ]' 00:18:47.246 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.246 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.246 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.246 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:47.246 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.246 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.246 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.246 00:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.504 00:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:18:48.438 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.438 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:48.438 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.438 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.438 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.438 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.438 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:48.438 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:48.697 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:48.697 00:45:06 
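The qpairs JSON above (cntlid 51, sha384 / null) is what the pass/fail decision keys off: target/auth.sh@46-48 pull out .auth.digest, .auth.dhgroup and .auth.state with jq and compare them against this iteration's expected values. A condensed, stand-alone form of that check, using the same jq filters as the trace and this iteration's expected values:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]      # digest configured for this run
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]        # dhgroup configured for this run
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication actually succeeded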
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.697 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.697 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:48.697 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:48.697 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.697 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.697 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.697 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.697 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.697 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.697 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.955 00:18:48.955 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.955 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.955 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.212 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.212 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.212 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.212 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.212 00:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.212 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.212 { 00:18:49.212 "cntlid": 53, 00:18:49.212 "qid": 0, 00:18:49.212 "state": "enabled", 00:18:49.212 "thread": "nvmf_tgt_poll_group_000", 00:18:49.213 "listen_address": { 00:18:49.213 "trtype": "TCP", 00:18:49.213 "adrfam": "IPv4", 00:18:49.213 "traddr": "10.0.0.2", 00:18:49.213 "trsvcid": "4420" 00:18:49.213 }, 00:18:49.213 "peer_address": { 00:18:49.213 "trtype": "TCP", 00:18:49.213 "adrfam": "IPv4", 00:18:49.213 "traddr": "10.0.0.1", 00:18:49.213 "trsvcid": "55728" 00:18:49.213 }, 00:18:49.213 "auth": { 00:18:49.213 "state": "completed", 00:18:49.213 "digest": "sha384", 00:18:49.213 "dhgroup": "null" 00:18:49.213 } 00:18:49.213 } 00:18:49.213 ]' 00:18:49.213 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.213 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:49.213 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.213 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:49.213 00:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.213 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.213 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.213 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.471 00:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:18:50.405 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.405 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:50.405 00:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.405 00:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.405 00:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.405 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.405 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:50.405 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:50.664 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:50.664 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.664 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.664 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:50.664 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:50.664 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.664 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:50.664 00:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.664 00:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.664 00:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.664 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.664 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.923 00:18:50.923 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.923 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.923 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.181 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.181 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.181 00:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.181 00:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.182 00:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.182 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.182 { 00:18:51.182 "cntlid": 55, 00:18:51.182 "qid": 0, 00:18:51.182 "state": "enabled", 00:18:51.182 "thread": "nvmf_tgt_poll_group_000", 00:18:51.182 "listen_address": { 00:18:51.182 "trtype": "TCP", 00:18:51.182 "adrfam": "IPv4", 00:18:51.182 "traddr": "10.0.0.2", 00:18:51.182 "trsvcid": "4420" 00:18:51.182 }, 00:18:51.182 "peer_address": { 00:18:51.182 "trtype": "TCP", 00:18:51.182 "adrfam": "IPv4", 00:18:51.182 "traddr": "10.0.0.1", 00:18:51.182 "trsvcid": "55752" 00:18:51.182 }, 00:18:51.182 "auth": { 00:18:51.182 "state": "completed", 00:18:51.182 "digest": "sha384", 00:18:51.182 "dhgroup": "null" 00:18:51.182 } 00:18:51.182 } 00:18:51.182 ]' 00:18:51.182 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.182 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.182 00:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.441 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:51.441 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.441 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.441 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.441 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.699 00:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:18:52.643 00:45:10 
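Note that the key3 iterations, like the nvme connect just above, pass --dhchap-secret but no --dhchap-ctrl-secret, and the matching nvmf_subsystem_add_host / bdev_nvme_attach_controller calls carry no --dhchap-ctrlr-key. That follows from the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line in the trace: the :+ expansion appends the controller-key arguments only when a ckey is defined for that key index. A tiny illustration of the expansion (the array contents here are made up for the example):

  ckeys=([0]=secret0 [1]=secret1 [2]=secret2 [3]="")                     # key3 has no controller key
  args=(${ckeys[1]:+--dhchap-ctrlr-key "ckey1"}); echo "${#args[@]}"     # 2 -> flags appended
  args=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"}); echo "${#args[@]}"     # 0 -> nothing appended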
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.643 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:52.643 00:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.643 00:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.643 00:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.643 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.643 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.643 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:52.643 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:52.901 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:52.901 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.902 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.902 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:52.902 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.902 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.902 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.902 00:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.902 00:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.902 00:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.902 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.902 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.160 00:18:53.160 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.160 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.160 00:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.419 00:45:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.419 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.419 00:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.419 00:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.419 00:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.419 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.419 { 00:18:53.419 "cntlid": 57, 00:18:53.419 "qid": 0, 00:18:53.419 "state": "enabled", 00:18:53.419 "thread": "nvmf_tgt_poll_group_000", 00:18:53.419 "listen_address": { 00:18:53.419 "trtype": "TCP", 00:18:53.419 "adrfam": "IPv4", 00:18:53.419 "traddr": "10.0.0.2", 00:18:53.419 "trsvcid": "4420" 00:18:53.419 }, 00:18:53.419 "peer_address": { 00:18:53.419 "trtype": "TCP", 00:18:53.419 "adrfam": "IPv4", 00:18:53.419 "traddr": "10.0.0.1", 00:18:53.419 "trsvcid": "55788" 00:18:53.419 }, 00:18:53.419 "auth": { 00:18:53.419 "state": "completed", 00:18:53.419 "digest": "sha384", 00:18:53.419 "dhgroup": "ffdhe2048" 00:18:53.419 } 00:18:53.419 } 00:18:53.419 ]' 00:18:53.419 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.678 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.678 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.678 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:53.678 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.678 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.678 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.678 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.937 00:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:18:55.315 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.315 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:55.315 00:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.315 00:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.315 00:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.315 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.315 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:55.315 00:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:55.315 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:55.315 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.315 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.315 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:55.315 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.315 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.315 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.315 00:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.315 00:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.315 00:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.315 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.315 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.572 00:18:55.572 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.572 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.572 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.829 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.829 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.829 00:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.829 00:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.829 00:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.829 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.829 { 00:18:55.829 "cntlid": 59, 00:18:55.829 "qid": 0, 00:18:55.829 "state": "enabled", 00:18:55.829 "thread": "nvmf_tgt_poll_group_000", 00:18:55.829 "listen_address": { 00:18:55.829 "trtype": "TCP", 00:18:55.829 "adrfam": "IPv4", 00:18:55.829 "traddr": "10.0.0.2", 00:18:55.829 "trsvcid": "4420" 00:18:55.829 }, 00:18:55.829 "peer_address": { 00:18:55.829 "trtype": "TCP", 00:18:55.829 "adrfam": "IPv4", 00:18:55.829 
"traddr": "10.0.0.1", 00:18:55.829 "trsvcid": "53996" 00:18:55.829 }, 00:18:55.829 "auth": { 00:18:55.829 "state": "completed", 00:18:55.829 "digest": "sha384", 00:18:55.830 "dhgroup": "ffdhe2048" 00:18:55.830 } 00:18:55.830 } 00:18:55.830 ]' 00:18:55.830 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.086 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.086 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.086 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:56.086 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.086 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.086 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.086 00:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.343 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:18:57.276 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.276 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:57.276 00:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.276 00:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.276 00:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.276 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.276 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:57.276 00:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:57.533 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:57.533 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.533 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.533 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:57.533 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:57.533 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.533 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.533 00:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.533 00:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.533 00:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.533 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.533 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.790 00:18:57.790 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.790 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.790 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.046 { 00:18:58.046 "cntlid": 61, 00:18:58.046 "qid": 0, 00:18:58.046 "state": "enabled", 00:18:58.046 "thread": "nvmf_tgt_poll_group_000", 00:18:58.046 "listen_address": { 00:18:58.046 "trtype": "TCP", 00:18:58.046 "adrfam": "IPv4", 00:18:58.046 "traddr": "10.0.0.2", 00:18:58.046 "trsvcid": "4420" 00:18:58.046 }, 00:18:58.046 "peer_address": { 00:18:58.046 "trtype": "TCP", 00:18:58.046 "adrfam": "IPv4", 00:18:58.046 "traddr": "10.0.0.1", 00:18:58.046 "trsvcid": "54022" 00:18:58.046 }, 00:18:58.046 "auth": { 00:18:58.046 "state": "completed", 00:18:58.046 "digest": "sha384", 00:18:58.046 "dhgroup": "ffdhe2048" 00:18:58.046 } 00:18:58.046 } 00:18:58.046 ]' 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.046 00:45:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.303 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:18:59.239 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.239 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:18:59.239 00:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.239 00:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.239 00:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.239 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.239 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:59.239 00:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:59.497 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:59.497 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.497 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.497 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:59.497 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.497 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.497 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:18:59.497 00:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.497 00:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.497 00:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.497 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.497 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.755 00:18:59.755 00:45:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.755 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.755 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.014 { 00:19:00.014 "cntlid": 63, 00:19:00.014 "qid": 0, 00:19:00.014 "state": "enabled", 00:19:00.014 "thread": "nvmf_tgt_poll_group_000", 00:19:00.014 "listen_address": { 00:19:00.014 "trtype": "TCP", 00:19:00.014 "adrfam": "IPv4", 00:19:00.014 "traddr": "10.0.0.2", 00:19:00.014 "trsvcid": "4420" 00:19:00.014 }, 00:19:00.014 "peer_address": { 00:19:00.014 "trtype": "TCP", 00:19:00.014 "adrfam": "IPv4", 00:19:00.014 "traddr": "10.0.0.1", 00:19:00.014 "trsvcid": "54044" 00:19:00.014 }, 00:19:00.014 "auth": { 00:19:00.014 "state": "completed", 00:19:00.014 "digest": "sha384", 00:19:00.014 "dhgroup": "ffdhe2048" 00:19:00.014 } 00:19:00.014 } 00:19:00.014 ]' 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.014 00:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.272 00:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:19:01.664 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.664 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:01.664 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.664 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
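The trace above finishes the sha384/ffdhe2048 pass of connect_authenticate for each configured key index; the output that follows repeats the identical sequence for the remaining dhgroups (ffdhe3072, ffdhe4096 and ffdhe6144 are visible below). As a condensed sketch of what a single pass does, based only on the commands visible in this trace: the NQNs, address and RPC sockets are the ones from this run, the hostrpc wrapper and shell variables are illustrative, DHCHAP_KEY/DHCHAP_CTRL_KEY stand in for the DHHC-1 secrets, and the key names key0/ckey0 are assumed to have been registered with the target earlier in the script.

  # rpc_cmd is the autotest helper that drives the nvmf target on its default RPC socket;
  # hostrpc mirrors target/auth.sh@31 and talks to the second SPDK app on /var/tmp/host.sock.
  hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562

  # 1. Limit the host-side bdev_nvme module to the digest/dhgroup under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # 2. Allow the host on the subsystem with the key pair for this key index.
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. Attach through the SPDK host stack and confirm the qpair completed DH-HMAC-CHAP.
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
  hostrpc bdev_nvme_detach_controller nvme0

  # 4. Repeat the handshake with the kernel initiator, then drop the host entry again.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
       --hostid 00abaa28-3537-eb11-906e-0017a4403562 \
       --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each pass therefore exercises both initiators, the SPDK bdev_nvme host stack and the kernel nvme-cli client, against the same subsystem before the next digest/dhgroup combination is configured.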
00:19:01.664 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.664 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.664 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.664 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:01.664 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:01.664 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:01.923 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.923 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.923 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:01.923 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.923 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.923 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.923 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.923 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.923 00:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.923 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.923 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.183 00:19:02.183 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.183 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.183 00:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.442 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.442 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.442 00:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.442 00:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.442 00:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.442 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.442 { 
00:19:02.442 "cntlid": 65, 00:19:02.442 "qid": 0, 00:19:02.442 "state": "enabled", 00:19:02.442 "thread": "nvmf_tgt_poll_group_000", 00:19:02.442 "listen_address": { 00:19:02.442 "trtype": "TCP", 00:19:02.442 "adrfam": "IPv4", 00:19:02.442 "traddr": "10.0.0.2", 00:19:02.442 "trsvcid": "4420" 00:19:02.442 }, 00:19:02.442 "peer_address": { 00:19:02.442 "trtype": "TCP", 00:19:02.442 "adrfam": "IPv4", 00:19:02.442 "traddr": "10.0.0.1", 00:19:02.442 "trsvcid": "54074" 00:19:02.442 }, 00:19:02.442 "auth": { 00:19:02.442 "state": "completed", 00:19:02.442 "digest": "sha384", 00:19:02.442 "dhgroup": "ffdhe3072" 00:19:02.442 } 00:19:02.442 } 00:19:02.442 ]' 00:19:02.442 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.442 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.442 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.442 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:02.442 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.740 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.740 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.740 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.740 00:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:19:03.677 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.677 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:03.677 00:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.677 00:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.677 00:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.677 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.677 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:03.677 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:03.936 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:03.936 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.936 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:03.936 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:03.936 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.936 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.936 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.936 00:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.936 00:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.936 00:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.936 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.936 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.194 00:19:04.194 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.194 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.194 00:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.452 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.452 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.452 00:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.452 00:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.452 00:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.452 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.452 { 00:19:04.452 "cntlid": 67, 00:19:04.452 "qid": 0, 00:19:04.452 "state": "enabled", 00:19:04.452 "thread": "nvmf_tgt_poll_group_000", 00:19:04.452 "listen_address": { 00:19:04.452 "trtype": "TCP", 00:19:04.452 "adrfam": "IPv4", 00:19:04.452 "traddr": "10.0.0.2", 00:19:04.452 "trsvcid": "4420" 00:19:04.452 }, 00:19:04.452 "peer_address": { 00:19:04.452 "trtype": "TCP", 00:19:04.452 "adrfam": "IPv4", 00:19:04.452 "traddr": "10.0.0.1", 00:19:04.452 "trsvcid": "43334" 00:19:04.452 }, 00:19:04.452 "auth": { 00:19:04.452 "state": "completed", 00:19:04.452 "digest": "sha384", 00:19:04.452 "dhgroup": "ffdhe3072" 00:19:04.452 } 00:19:04.452 } 00:19:04.452 ]' 00:19:04.452 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.452 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.452 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.452 00:45:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:04.452 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.711 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.711 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.711 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.970 00:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:19:06.366 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.367 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:06.367 00:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.367 00:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.367 00:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.367 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.367 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:06.367 00:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:06.367 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:06.367 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.367 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.367 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:06.367 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.367 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.367 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.367 00:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.367 00:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.367 00:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.367 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.367 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.625 00:19:06.625 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.625 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.625 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.883 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.883 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.883 00:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.883 00:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.883 00:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.883 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.883 { 00:19:06.883 "cntlid": 69, 00:19:06.883 "qid": 0, 00:19:06.883 "state": "enabled", 00:19:06.883 "thread": "nvmf_tgt_poll_group_000", 00:19:06.883 "listen_address": { 00:19:06.883 "trtype": "TCP", 00:19:06.883 "adrfam": "IPv4", 00:19:06.883 "traddr": "10.0.0.2", 00:19:06.883 "trsvcid": "4420" 00:19:06.883 }, 00:19:06.883 "peer_address": { 00:19:06.883 "trtype": "TCP", 00:19:06.883 "adrfam": "IPv4", 00:19:06.883 "traddr": "10.0.0.1", 00:19:06.883 "trsvcid": "43378" 00:19:06.883 }, 00:19:06.883 "auth": { 00:19:06.883 "state": "completed", 00:19:06.883 "digest": "sha384", 00:19:06.883 "dhgroup": "ffdhe3072" 00:19:06.883 } 00:19:06.883 } 00:19:06.883 ]' 00:19:06.883 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.883 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.883 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.141 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:07.141 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.141 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.141 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.141 00:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.398 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret 
DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:19:08.329 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.329 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:08.329 00:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.329 00:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.329 00:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.329 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.329 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:08.329 00:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:08.329 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:08.329 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.329 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:08.330 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:08.330 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:08.330 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.330 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:08.330 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.330 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.330 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.330 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.330 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.896 00:19:08.896 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.896 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.896 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.896 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.896 00:45:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.896 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.896 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.896 00:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.896 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.896 { 00:19:08.896 "cntlid": 71, 00:19:08.896 "qid": 0, 00:19:08.896 "state": "enabled", 00:19:08.896 "thread": "nvmf_tgt_poll_group_000", 00:19:08.896 "listen_address": { 00:19:08.896 "trtype": "TCP", 00:19:08.896 "adrfam": "IPv4", 00:19:08.896 "traddr": "10.0.0.2", 00:19:08.896 "trsvcid": "4420" 00:19:08.896 }, 00:19:08.896 "peer_address": { 00:19:08.896 "trtype": "TCP", 00:19:08.896 "adrfam": "IPv4", 00:19:08.896 "traddr": "10.0.0.1", 00:19:08.896 "trsvcid": "43406" 00:19:08.896 }, 00:19:08.896 "auth": { 00:19:08.896 "state": "completed", 00:19:08.896 "digest": "sha384", 00:19:08.896 "dhgroup": "ffdhe3072" 00:19:08.896 } 00:19:08.896 } 00:19:08.896 ]' 00:19:09.154 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.154 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.154 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.154 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:09.154 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.154 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.154 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.154 00:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.411 00:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:19:10.408 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.408 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:10.408 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.408 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.408 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.408 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.408 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.408 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:10.408 00:45:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:10.687 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:10.687 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.687 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:10.687 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:10.687 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:10.687 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.687 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.687 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.687 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.687 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.687 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.687 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.975 00:19:10.975 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.975 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.975 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.235 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.235 00:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.235 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.235 00:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.235 00:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.235 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.235 { 00:19:11.235 "cntlid": 73, 00:19:11.235 "qid": 0, 00:19:11.235 "state": "enabled", 00:19:11.235 "thread": "nvmf_tgt_poll_group_000", 00:19:11.235 "listen_address": { 00:19:11.235 "trtype": "TCP", 00:19:11.235 "adrfam": "IPv4", 00:19:11.235 "traddr": "10.0.0.2", 00:19:11.235 "trsvcid": "4420" 00:19:11.235 }, 00:19:11.235 "peer_address": { 00:19:11.235 "trtype": "TCP", 00:19:11.235 "adrfam": "IPv4", 00:19:11.235 "traddr": "10.0.0.1", 00:19:11.235 "trsvcid": "43422" 00:19:11.235 }, 00:19:11.235 "auth": { 00:19:11.235 
"state": "completed", 00:19:11.235 "digest": "sha384", 00:19:11.235 "dhgroup": "ffdhe4096" 00:19:11.235 } 00:19:11.235 } 00:19:11.235 ]' 00:19:11.235 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.235 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.235 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.494 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:11.494 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.494 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.494 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.494 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.754 00:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:19:12.323 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.582 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.841 00:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.841 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.841 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.100 00:19:13.100 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.100 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.100 00:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.359 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.360 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.360 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.360 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.360 00:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.360 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.360 { 00:19:13.360 "cntlid": 75, 00:19:13.360 "qid": 0, 00:19:13.360 "state": "enabled", 00:19:13.360 "thread": "nvmf_tgt_poll_group_000", 00:19:13.360 "listen_address": { 00:19:13.360 "trtype": "TCP", 00:19:13.360 "adrfam": "IPv4", 00:19:13.360 "traddr": "10.0.0.2", 00:19:13.360 "trsvcid": "4420" 00:19:13.360 }, 00:19:13.360 "peer_address": { 00:19:13.360 "trtype": "TCP", 00:19:13.360 "adrfam": "IPv4", 00:19:13.360 "traddr": "10.0.0.1", 00:19:13.360 "trsvcid": "43454" 00:19:13.360 }, 00:19:13.360 "auth": { 00:19:13.360 "state": "completed", 00:19:13.360 "digest": "sha384", 00:19:13.360 "dhgroup": "ffdhe4096" 00:19:13.360 } 00:19:13.360 } 00:19:13.360 ]' 00:19:13.360 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.360 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.360 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.360 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:13.360 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.619 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.619 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.619 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.878 00:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:19:14.814 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.072 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:15.072 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.072 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.072 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.072 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.072 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:15.072 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:15.330 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:15.330 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.330 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:15.330 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:15.330 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.330 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.330 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.330 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.330 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.330 00:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.330 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.330 00:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:15.590 00:19:15.590 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.590 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.590 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.847 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.847 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.847 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.847 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.848 00:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.848 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.848 { 00:19:15.848 "cntlid": 77, 00:19:15.848 "qid": 0, 00:19:15.848 "state": "enabled", 00:19:15.848 "thread": "nvmf_tgt_poll_group_000", 00:19:15.848 "listen_address": { 00:19:15.848 "trtype": "TCP", 00:19:15.848 "adrfam": "IPv4", 00:19:15.848 "traddr": "10.0.0.2", 00:19:15.848 "trsvcid": "4420" 00:19:15.848 }, 00:19:15.848 "peer_address": { 00:19:15.848 "trtype": "TCP", 00:19:15.848 "adrfam": "IPv4", 00:19:15.848 "traddr": "10.0.0.1", 00:19:15.848 "trsvcid": "48356" 00:19:15.848 }, 00:19:15.848 "auth": { 00:19:15.848 "state": "completed", 00:19:15.848 "digest": "sha384", 00:19:15.848 "dhgroup": "ffdhe4096" 00:19:15.848 } 00:19:15.848 } 00:19:15.848 ]' 00:19:15.848 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.848 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.848 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.848 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:15.848 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.106 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.106 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.106 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.364 00:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:19:16.931 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.190 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:17.190 00:45:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.190 00:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.190 00:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.190 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.190 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:17.190 00:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:17.448 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:17.448 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.448 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:17.448 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:17.448 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:17.448 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.448 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:17.448 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.448 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.448 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.448 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.448 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.707 00:19:17.707 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.707 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.707 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.964 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.964 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.964 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.964 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.964 00:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.964 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.964 { 00:19:17.964 "cntlid": 79, 00:19:17.964 "qid": 
0, 00:19:17.964 "state": "enabled", 00:19:17.964 "thread": "nvmf_tgt_poll_group_000", 00:19:17.965 "listen_address": { 00:19:17.965 "trtype": "TCP", 00:19:17.965 "adrfam": "IPv4", 00:19:17.965 "traddr": "10.0.0.2", 00:19:17.965 "trsvcid": "4420" 00:19:17.965 }, 00:19:17.965 "peer_address": { 00:19:17.965 "trtype": "TCP", 00:19:17.965 "adrfam": "IPv4", 00:19:17.965 "traddr": "10.0.0.1", 00:19:17.965 "trsvcid": "48368" 00:19:17.965 }, 00:19:17.965 "auth": { 00:19:17.965 "state": "completed", 00:19:17.965 "digest": "sha384", 00:19:17.965 "dhgroup": "ffdhe4096" 00:19:17.965 } 00:19:17.965 } 00:19:17.965 ]' 00:19:17.965 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.965 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.965 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.965 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:17.965 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.222 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.222 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.222 00:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.479 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:19:19.055 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.315 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:19.315 00:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.315 00:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.315 00:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.315 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.315 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.315 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:19.315 00:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:19.572 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:19.572 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.572 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:19.572 00:45:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:19.572 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.572 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.572 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.572 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.572 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.572 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.572 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.572 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.830 00:19:19.830 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.830 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.830 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.087 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.087 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.087 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.087 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.345 00:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.345 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.345 { 00:19:20.345 "cntlid": 81, 00:19:20.345 "qid": 0, 00:19:20.345 "state": "enabled", 00:19:20.345 "thread": "nvmf_tgt_poll_group_000", 00:19:20.345 "listen_address": { 00:19:20.345 "trtype": "TCP", 00:19:20.345 "adrfam": "IPv4", 00:19:20.345 "traddr": "10.0.0.2", 00:19:20.345 "trsvcid": "4420" 00:19:20.345 }, 00:19:20.345 "peer_address": { 00:19:20.345 "trtype": "TCP", 00:19:20.345 "adrfam": "IPv4", 00:19:20.345 "traddr": "10.0.0.1", 00:19:20.345 "trsvcid": "48392" 00:19:20.345 }, 00:19:20.345 "auth": { 00:19:20.345 "state": "completed", 00:19:20.345 "digest": "sha384", 00:19:20.345 "dhgroup": "ffdhe6144" 00:19:20.345 } 00:19:20.345 } 00:19:20.345 ]' 00:19:20.345 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.345 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.345 00:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.345 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:20.345 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.345 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.345 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.345 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.910 00:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.846 00:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.414 00:19:22.414 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.414 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.414 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.673 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.673 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.673 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.673 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.673 00:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.673 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.673 { 00:19:22.673 "cntlid": 83, 00:19:22.673 "qid": 0, 00:19:22.673 "state": "enabled", 00:19:22.673 "thread": "nvmf_tgt_poll_group_000", 00:19:22.673 "listen_address": { 00:19:22.673 "trtype": "TCP", 00:19:22.673 "adrfam": "IPv4", 00:19:22.673 "traddr": "10.0.0.2", 00:19:22.673 "trsvcid": "4420" 00:19:22.673 }, 00:19:22.673 "peer_address": { 00:19:22.673 "trtype": "TCP", 00:19:22.673 "adrfam": "IPv4", 00:19:22.673 "traddr": "10.0.0.1", 00:19:22.673 "trsvcid": "48420" 00:19:22.673 }, 00:19:22.673 "auth": { 00:19:22.673 "state": "completed", 00:19:22.673 "digest": "sha384", 00:19:22.673 "dhgroup": "ffdhe6144" 00:19:22.673 } 00:19:22.673 } 00:19:22.673 ]' 00:19:22.673 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.673 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.673 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.931 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:22.931 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.931 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.931 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.931 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.189 00:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret 
DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.125 00:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.691 00:19:24.691 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.691 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.691 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.949 00:45:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.949 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.949 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.949 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.949 00:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.949 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.949 { 00:19:24.949 "cntlid": 85, 00:19:24.949 "qid": 0, 00:19:24.949 "state": "enabled", 00:19:24.949 "thread": "nvmf_tgt_poll_group_000", 00:19:24.949 "listen_address": { 00:19:24.949 "trtype": "TCP", 00:19:24.949 "adrfam": "IPv4", 00:19:24.949 "traddr": "10.0.0.2", 00:19:24.949 "trsvcid": "4420" 00:19:24.949 }, 00:19:24.949 "peer_address": { 00:19:24.949 "trtype": "TCP", 00:19:24.949 "adrfam": "IPv4", 00:19:24.950 "traddr": "10.0.0.1", 00:19:24.950 "trsvcid": "56108" 00:19:24.950 }, 00:19:24.950 "auth": { 00:19:24.950 "state": "completed", 00:19:24.950 "digest": "sha384", 00:19:24.950 "dhgroup": "ffdhe6144" 00:19:24.950 } 00:19:24.950 } 00:19:24.950 ]' 00:19:24.950 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.950 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.950 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.950 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:24.950 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.208 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.208 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.208 00:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.466 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:19:26.034 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.034 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:26.034 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.034 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.034 00:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.034 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.034 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
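For readers following the trace, every connect_authenticate pass above reduces to the same host/target RPC sequence; the sketch below condenses one sha384/ffdhe6144 iteration using the NQNs and addresses from this run. It is illustrative only: hostrpc and rpc_cmd are the test suite's own wrappers (the trace shows hostrpc expanding to scripts/rpc.py -s /var/tmp/host.sock on the host side; rpc_cmd is assumed here to address the nvmf target's RPC socket), and the direct pipe into jq condenses the script's intermediate $qpairs variable.
# one connect_authenticate <digest> <dhgroup> <keyid> pass, condensed from the trace above
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
subnqn=nqn.2024-03.io.spdk:cnode0
# host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
# target side: allow this host on the subsystem with the key (and controller key, if one exists)
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach with the same key pair, then verify the queue pair authenticated
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect "completed"
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect "ffdhe6144"
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_detach_controller nvme0
As the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion in the trace suggests, the controller key is only passed when one is defined for that key index, which is why the key3 iterations above omit --dhchap-ctrlr-key.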
00:19:26.034 00:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:26.292 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:26.292 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.292 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:26.292 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:26.292 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:26.292 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.292 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:26.292 00:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.292 00:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.292 00:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.292 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.292 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.858 00:19:26.858 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.858 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.858 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.117 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.117 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.117 00:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.117 00:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.117 00:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.117 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.117 { 00:19:27.117 "cntlid": 87, 00:19:27.117 "qid": 0, 00:19:27.117 "state": "enabled", 00:19:27.117 "thread": "nvmf_tgt_poll_group_000", 00:19:27.117 "listen_address": { 00:19:27.117 "trtype": "TCP", 00:19:27.117 "adrfam": "IPv4", 00:19:27.117 "traddr": "10.0.0.2", 00:19:27.117 "trsvcid": "4420" 00:19:27.117 }, 00:19:27.117 "peer_address": { 00:19:27.117 "trtype": "TCP", 00:19:27.117 "adrfam": "IPv4", 00:19:27.117 "traddr": "10.0.0.1", 00:19:27.117 "trsvcid": "56130" 00:19:27.117 }, 00:19:27.117 "auth": { 00:19:27.117 "state": "completed", 
00:19:27.117 "digest": "sha384", 00:19:27.117 "dhgroup": "ffdhe6144" 00:19:27.117 } 00:19:27.117 } 00:19:27.117 ]' 00:19:27.117 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.117 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.117 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.117 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:27.117 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.375 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.375 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.375 00:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.632 00:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:19:28.199 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.199 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:28.199 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.199 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.458 00:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.392 00:19:29.392 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.392 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.392 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.651 { 00:19:29.651 "cntlid": 89, 00:19:29.651 "qid": 0, 00:19:29.651 "state": "enabled", 00:19:29.651 "thread": "nvmf_tgt_poll_group_000", 00:19:29.651 "listen_address": { 00:19:29.651 "trtype": "TCP", 00:19:29.651 "adrfam": "IPv4", 00:19:29.651 "traddr": "10.0.0.2", 00:19:29.651 "trsvcid": "4420" 00:19:29.651 }, 00:19:29.651 "peer_address": { 00:19:29.651 "trtype": "TCP", 00:19:29.651 "adrfam": "IPv4", 00:19:29.651 "traddr": "10.0.0.1", 00:19:29.651 "trsvcid": "56148" 00:19:29.651 }, 00:19:29.651 "auth": { 00:19:29.651 "state": "completed", 00:19:29.651 "digest": "sha384", 00:19:29.651 "dhgroup": "ffdhe8192" 00:19:29.651 } 00:19:29.651 } 00:19:29.651 ]' 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.651 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.910 00:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:19:30.846 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.846 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:30.846 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.846 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.846 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.846 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.846 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:30.846 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:31.105 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:31.105 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.105 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:31.105 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:31.105 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:31.105 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.105 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.105 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.105 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.105 00:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.105 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.105 00:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
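After the bdev-level attach/detach check, each pass also exercises the kernel-initiator leg visible in the trace: nvme-cli connects with the pre-generated DHHC-1 secrets, disconnects, and the host is removed from the subsystem before the next dhgroup/key combination. A condensed sketch (secrets truncated; hostnqn/subnqn as in the earlier sketch):
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid 00abaa28-3537-eb11-906e-0017a4403562 \
    --dhchap-secret 'DHHC-1:00:MDkw...' --dhchap-ctrl-secret 'DHHC-1:03:YWVh...'
nvme disconnect -n "$subnqn"    # trace shows "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"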
00:19:32.043 00:19:32.043 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.043 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.043 00:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.301 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.301 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.301 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.301 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.301 00:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.301 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.301 { 00:19:32.301 "cntlid": 91, 00:19:32.301 "qid": 0, 00:19:32.301 "state": "enabled", 00:19:32.301 "thread": "nvmf_tgt_poll_group_000", 00:19:32.301 "listen_address": { 00:19:32.301 "trtype": "TCP", 00:19:32.301 "adrfam": "IPv4", 00:19:32.301 "traddr": "10.0.0.2", 00:19:32.301 "trsvcid": "4420" 00:19:32.301 }, 00:19:32.301 "peer_address": { 00:19:32.301 "trtype": "TCP", 00:19:32.301 "adrfam": "IPv4", 00:19:32.301 "traddr": "10.0.0.1", 00:19:32.301 "trsvcid": "56174" 00:19:32.301 }, 00:19:32.301 "auth": { 00:19:32.301 "state": "completed", 00:19:32.301 "digest": "sha384", 00:19:32.301 "dhgroup": "ffdhe8192" 00:19:32.301 } 00:19:32.301 } 00:19:32.301 ]' 00:19:32.301 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.301 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.301 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.301 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:32.301 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.559 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.559 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.559 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.817 00:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:19:33.384 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.643 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:33.643 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:33.643 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.643 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.643 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.643 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:33.643 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:33.902 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:33.902 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.902 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.902 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:33.902 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:33.902 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.902 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.902 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.902 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.903 00:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.903 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.903 00:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.467 00:19:34.467 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.467 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.467 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.725 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.725 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.725 00:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.725 00:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.725 00:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.725 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.725 { 
00:19:34.725 "cntlid": 93, 00:19:34.725 "qid": 0, 00:19:34.725 "state": "enabled", 00:19:34.725 "thread": "nvmf_tgt_poll_group_000", 00:19:34.725 "listen_address": { 00:19:34.725 "trtype": "TCP", 00:19:34.725 "adrfam": "IPv4", 00:19:34.726 "traddr": "10.0.0.2", 00:19:34.726 "trsvcid": "4420" 00:19:34.726 }, 00:19:34.726 "peer_address": { 00:19:34.726 "trtype": "TCP", 00:19:34.726 "adrfam": "IPv4", 00:19:34.726 "traddr": "10.0.0.1", 00:19:34.726 "trsvcid": "56198" 00:19:34.726 }, 00:19:34.726 "auth": { 00:19:34.726 "state": "completed", 00:19:34.726 "digest": "sha384", 00:19:34.726 "dhgroup": "ffdhe8192" 00:19:34.726 } 00:19:34.726 } 00:19:34.726 ]' 00:19:34.726 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.726 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.726 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.726 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:34.726 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.983 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.983 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.983 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.240 00:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:19:35.804 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:36.062 00:45:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.062 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.335 00:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.335 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.335 00:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.936 00:19:36.936 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.936 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.936 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.195 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.195 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.195 00:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.195 00:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.195 00:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.195 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.195 { 00:19:37.195 "cntlid": 95, 00:19:37.195 "qid": 0, 00:19:37.195 "state": "enabled", 00:19:37.195 "thread": "nvmf_tgt_poll_group_000", 00:19:37.195 "listen_address": { 00:19:37.195 "trtype": "TCP", 00:19:37.195 "adrfam": "IPv4", 00:19:37.195 "traddr": "10.0.0.2", 00:19:37.195 "trsvcid": "4420" 00:19:37.195 }, 00:19:37.195 "peer_address": { 00:19:37.195 "trtype": "TCP", 00:19:37.195 "adrfam": "IPv4", 00:19:37.195 "traddr": "10.0.0.1", 00:19:37.195 "trsvcid": "56388" 00:19:37.195 }, 00:19:37.195 "auth": { 00:19:37.195 "state": "completed", 00:19:37.195 "digest": "sha384", 00:19:37.195 "dhgroup": "ffdhe8192" 00:19:37.195 } 00:19:37.195 } 00:19:37.195 ]' 00:19:37.195 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.195 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.195 00:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.195 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:37.195 00:45:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.455 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.455 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.455 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.713 00:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:19:38.279 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.279 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:38.279 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.279 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.279 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.279 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:38.279 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.279 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.279 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:38.279 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:38.846 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:38.846 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.846 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.846 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:38.846 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:38.846 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.846 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.846 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.846 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.846 00:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.846 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.846 00:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.413 00:19:39.413 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.413 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.413 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.671 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.671 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.671 00:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.671 00:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.671 00:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.671 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.671 { 00:19:39.671 "cntlid": 97, 00:19:39.671 "qid": 0, 00:19:39.671 "state": "enabled", 00:19:39.671 "thread": "nvmf_tgt_poll_group_000", 00:19:39.671 "listen_address": { 00:19:39.671 "trtype": "TCP", 00:19:39.671 "adrfam": "IPv4", 00:19:39.671 "traddr": "10.0.0.2", 00:19:39.671 "trsvcid": "4420" 00:19:39.671 }, 00:19:39.671 "peer_address": { 00:19:39.671 "trtype": "TCP", 00:19:39.671 "adrfam": "IPv4", 00:19:39.671 "traddr": "10.0.0.1", 00:19:39.671 "trsvcid": "56422" 00:19:39.671 }, 00:19:39.671 "auth": { 00:19:39.671 "state": "completed", 00:19:39.671 "digest": "sha512", 00:19:39.671 "dhgroup": "null" 00:19:39.671 } 00:19:39.671 } 00:19:39.671 ]' 00:19:39.671 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.929 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.929 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.929 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:39.929 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.929 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.929 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.929 00:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.495 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret 
DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:19:41.429 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.429 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:41.429 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.429 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.429 00:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.429 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.429 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:41.429 00:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:41.429 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:41.429 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.429 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.429 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:41.429 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.429 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.429 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.429 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.429 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.429 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.429 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.429 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.687 00:19:41.687 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.687 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.687 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.946 00:45:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.946 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.946 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.946 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.946 00:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.946 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.946 { 00:19:41.946 "cntlid": 99, 00:19:41.946 "qid": 0, 00:19:41.946 "state": "enabled", 00:19:41.946 "thread": "nvmf_tgt_poll_group_000", 00:19:41.946 "listen_address": { 00:19:41.946 "trtype": "TCP", 00:19:41.946 "adrfam": "IPv4", 00:19:41.946 "traddr": "10.0.0.2", 00:19:41.946 "trsvcid": "4420" 00:19:41.946 }, 00:19:41.946 "peer_address": { 00:19:41.946 "trtype": "TCP", 00:19:41.946 "adrfam": "IPv4", 00:19:41.946 "traddr": "10.0.0.1", 00:19:41.946 "trsvcid": "56450" 00:19:41.946 }, 00:19:41.946 "auth": { 00:19:41.946 "state": "completed", 00:19:41.946 "digest": "sha512", 00:19:41.946 "dhgroup": "null" 00:19:41.946 } 00:19:41.946 } 00:19:41.946 ]' 00:19:41.946 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.946 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.946 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.204 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:42.204 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.204 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.204 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.204 00:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.461 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:19:43.393 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.393 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:43.393 00:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.393 00:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.393 00:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.393 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.393 00:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:43.393 00:46:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:43.393 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:43.393 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.393 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.393 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:43.393 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:43.393 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.393 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.393 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.393 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.393 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.393 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.393 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.958 00:19:43.958 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.958 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.958 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.247 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.247 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.247 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.247 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.247 00:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.247 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.247 { 00:19:44.247 "cntlid": 101, 00:19:44.247 "qid": 0, 00:19:44.247 "state": "enabled", 00:19:44.247 "thread": "nvmf_tgt_poll_group_000", 00:19:44.247 "listen_address": { 00:19:44.247 "trtype": "TCP", 00:19:44.247 "adrfam": "IPv4", 00:19:44.247 "traddr": "10.0.0.2", 00:19:44.247 "trsvcid": "4420" 00:19:44.247 }, 00:19:44.247 "peer_address": { 00:19:44.247 "trtype": "TCP", 00:19:44.247 "adrfam": "IPv4", 00:19:44.247 "traddr": "10.0.0.1", 00:19:44.247 "trsvcid": "56482" 00:19:44.247 }, 00:19:44.247 "auth": 
{ 00:19:44.247 "state": "completed", 00:19:44.247 "digest": "sha512", 00:19:44.247 "dhgroup": "null" 00:19:44.247 } 00:19:44.247 } 00:19:44.247 ]' 00:19:44.247 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.247 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.247 00:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.247 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:44.247 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.247 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.247 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.247 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.505 00:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:19:45.437 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.437 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:45.437 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.437 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.437 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.437 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.437 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:45.437 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:45.693 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:45.693 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.693 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.693 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:45.693 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:45.693 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.693 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:45.693 00:46:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.693 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.693 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.693 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.693 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.950 00:19:45.950 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.950 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.950 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.209 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.209 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.209 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.209 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.209 00:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.209 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.209 { 00:19:46.209 "cntlid": 103, 00:19:46.209 "qid": 0, 00:19:46.209 "state": "enabled", 00:19:46.209 "thread": "nvmf_tgt_poll_group_000", 00:19:46.209 "listen_address": { 00:19:46.209 "trtype": "TCP", 00:19:46.209 "adrfam": "IPv4", 00:19:46.209 "traddr": "10.0.0.2", 00:19:46.209 "trsvcid": "4420" 00:19:46.209 }, 00:19:46.209 "peer_address": { 00:19:46.209 "trtype": "TCP", 00:19:46.209 "adrfam": "IPv4", 00:19:46.209 "traddr": "10.0.0.1", 00:19:46.209 "trsvcid": "33428" 00:19:46.209 }, 00:19:46.209 "auth": { 00:19:46.209 "state": "completed", 00:19:46.209 "digest": "sha512", 00:19:46.209 "dhgroup": "null" 00:19:46.209 } 00:19:46.209 } 00:19:46.209 ]' 00:19:46.209 00:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.209 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.209 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.467 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:46.467 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.467 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.467 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.467 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.726 00:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.663 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.922 00:19:48.181 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.181 00:46:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.181 00:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.440 { 00:19:48.440 "cntlid": 105, 00:19:48.440 "qid": 0, 00:19:48.440 "state": "enabled", 00:19:48.440 "thread": "nvmf_tgt_poll_group_000", 00:19:48.440 "listen_address": { 00:19:48.440 "trtype": "TCP", 00:19:48.440 "adrfam": "IPv4", 00:19:48.440 "traddr": "10.0.0.2", 00:19:48.440 "trsvcid": "4420" 00:19:48.440 }, 00:19:48.440 "peer_address": { 00:19:48.440 "trtype": "TCP", 00:19:48.440 "adrfam": "IPv4", 00:19:48.440 "traddr": "10.0.0.1", 00:19:48.440 "trsvcid": "33458" 00:19:48.440 }, 00:19:48.440 "auth": { 00:19:48.440 "state": "completed", 00:19:48.440 "digest": "sha512", 00:19:48.440 "dhgroup": "ffdhe2048" 00:19:48.440 } 00:19:48.440 } 00:19:48.440 ]' 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.440 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.699 00:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:19:49.636 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.636 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:49.636 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.636 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
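
The round traced above (sha512 digest, ffdhe2048 DH group, key pair key0/ckey0) follows the same host/target RPC sequence as every other digest, DH group and key combination exercised in this log. Below is a minimal, illustrative recap of that sequence assembled only from commands that appear in the trace; the NQNs, addresses and key names are the test's own, and rpc_cmd is assumed to be the autotest helper that talks to the target's default RPC socket.

# Sketch of one DH-HMAC-CHAP round as exercised above; key0/ckey0 are key
# names registered earlier in the run, not literal secrets.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562

# Host side: restrict negotiation to a single digest/dhgroup combination.
"$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Target side: allow the host and bind its DH-HMAC-CHAP key pair.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller with the same keys, then confirm the target
# reports a completed sha512/ffdhe2048 authentication on the new qpair.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"

# Host-side detach; as in the trace, the kernel initiator then repeats the
# check with nvme connect using the literal DHHC-1 secrets, and once it has
# disconnected the host entry is removed before the next combination.
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The same pattern continues below with the remaining key indexes and the larger ffdhe DH groups.
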
00:19:49.636 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.636 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.636 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:49.636 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:49.895 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:49.895 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.895 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.895 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:49.895 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:49.895 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.895 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.895 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.895 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.895 00:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.895 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.895 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.152 00:19:50.152 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.152 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.152 00:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.409 { 00:19:50.409 "cntlid": 107, 00:19:50.409 "qid": 0, 00:19:50.409 "state": "enabled", 00:19:50.409 "thread": 
"nvmf_tgt_poll_group_000", 00:19:50.409 "listen_address": { 00:19:50.409 "trtype": "TCP", 00:19:50.409 "adrfam": "IPv4", 00:19:50.409 "traddr": "10.0.0.2", 00:19:50.409 "trsvcid": "4420" 00:19:50.409 }, 00:19:50.409 "peer_address": { 00:19:50.409 "trtype": "TCP", 00:19:50.409 "adrfam": "IPv4", 00:19:50.409 "traddr": "10.0.0.1", 00:19:50.409 "trsvcid": "33470" 00:19:50.409 }, 00:19:50.409 "auth": { 00:19:50.409 "state": "completed", 00:19:50.409 "digest": "sha512", 00:19:50.409 "dhgroup": "ffdhe2048" 00:19:50.409 } 00:19:50.409 } 00:19:50.409 ]' 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.409 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.667 00:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:19:51.602 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.602 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:51.602 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.602 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.602 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.602 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.602 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:51.602 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:51.861 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:51.861 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.861 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.861 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:51.861 00:46:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:51.861 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.861 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.861 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.861 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.861 00:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.861 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.861 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.119 00:19:52.119 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.119 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.119 00:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.377 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.377 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.377 00:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.377 00:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.377 00:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.377 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.377 { 00:19:52.377 "cntlid": 109, 00:19:52.377 "qid": 0, 00:19:52.377 "state": "enabled", 00:19:52.377 "thread": "nvmf_tgt_poll_group_000", 00:19:52.377 "listen_address": { 00:19:52.377 "trtype": "TCP", 00:19:52.377 "adrfam": "IPv4", 00:19:52.377 "traddr": "10.0.0.2", 00:19:52.377 "trsvcid": "4420" 00:19:52.377 }, 00:19:52.377 "peer_address": { 00:19:52.377 "trtype": "TCP", 00:19:52.377 "adrfam": "IPv4", 00:19:52.377 "traddr": "10.0.0.1", 00:19:52.377 "trsvcid": "33504" 00:19:52.377 }, 00:19:52.377 "auth": { 00:19:52.377 "state": "completed", 00:19:52.377 "digest": "sha512", 00:19:52.377 "dhgroup": "ffdhe2048" 00:19:52.377 } 00:19:52.377 } 00:19:52.377 ]' 00:19:52.377 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.635 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.636 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.636 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.636 00:46:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.636 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.636 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.636 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.894 00:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:19:54.269 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.269 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:54.269 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.269 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.269 00:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.269 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.269 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:54.269 00:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:54.269 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:54.269 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.269 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.269 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:54.269 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:54.269 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.269 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:19:54.269 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.269 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.269 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.269 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.269 00:46:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.526 00:19:54.526 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.526 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.526 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.783 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.783 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.783 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.783 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.783 00:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.783 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.783 { 00:19:54.783 "cntlid": 111, 00:19:54.783 "qid": 0, 00:19:54.783 "state": "enabled", 00:19:54.783 "thread": "nvmf_tgt_poll_group_000", 00:19:54.783 "listen_address": { 00:19:54.783 "trtype": "TCP", 00:19:54.783 "adrfam": "IPv4", 00:19:54.783 "traddr": "10.0.0.2", 00:19:54.783 "trsvcid": "4420" 00:19:54.783 }, 00:19:54.783 "peer_address": { 00:19:54.783 "trtype": "TCP", 00:19:54.783 "adrfam": "IPv4", 00:19:54.783 "traddr": "10.0.0.1", 00:19:54.783 "trsvcid": "58876" 00:19:54.783 }, 00:19:54.783 "auth": { 00:19:54.783 "state": "completed", 00:19:54.783 "digest": "sha512", 00:19:54.783 "dhgroup": "ffdhe2048" 00:19:54.783 } 00:19:54.783 } 00:19:54.783 ]' 00:19:54.783 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.040 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.040 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.040 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:55.040 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.040 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.040 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.040 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.299 00:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:19:56.232 00:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.232 00:46:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:56.232 00:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.232 00:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.232 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.232 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.232 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.232 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:56.232 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:56.489 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:56.489 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.490 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.490 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:56.490 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:56.490 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.490 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.490 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.490 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.490 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.490 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.490 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.747 00:19:57.006 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.006 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.007 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.007 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.007 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.007 00:46:14 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.007 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.265 00:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.265 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.265 { 00:19:57.265 "cntlid": 113, 00:19:57.265 "qid": 0, 00:19:57.265 "state": "enabled", 00:19:57.265 "thread": "nvmf_tgt_poll_group_000", 00:19:57.265 "listen_address": { 00:19:57.265 "trtype": "TCP", 00:19:57.265 "adrfam": "IPv4", 00:19:57.265 "traddr": "10.0.0.2", 00:19:57.265 "trsvcid": "4420" 00:19:57.265 }, 00:19:57.265 "peer_address": { 00:19:57.265 "trtype": "TCP", 00:19:57.265 "adrfam": "IPv4", 00:19:57.265 "traddr": "10.0.0.1", 00:19:57.265 "trsvcid": "58904" 00:19:57.265 }, 00:19:57.265 "auth": { 00:19:57.265 "state": "completed", 00:19:57.265 "digest": "sha512", 00:19:57.265 "dhgroup": "ffdhe3072" 00:19:57.265 } 00:19:57.265 } 00:19:57.265 ]' 00:19:57.265 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.265 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.265 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.265 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:57.265 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.265 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.265 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.265 00:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.524 00:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:19:58.459 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.459 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:19:58.459 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.459 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.459 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.459 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.459 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.459 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:58.718 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:58.718 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.718 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.718 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:58.718 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:58.718 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.718 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.718 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.718 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.718 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.718 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.718 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.977 00:19:58.977 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.977 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.977 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.236 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.236 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.236 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.236 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.236 00:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.236 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.236 { 00:19:59.236 "cntlid": 115, 00:19:59.236 "qid": 0, 00:19:59.236 "state": "enabled", 00:19:59.236 "thread": "nvmf_tgt_poll_group_000", 00:19:59.236 "listen_address": { 00:19:59.236 "trtype": "TCP", 00:19:59.236 "adrfam": "IPv4", 00:19:59.236 "traddr": "10.0.0.2", 00:19:59.236 "trsvcid": "4420" 00:19:59.236 }, 00:19:59.236 "peer_address": { 00:19:59.236 "trtype": "TCP", 00:19:59.236 "adrfam": "IPv4", 00:19:59.236 "traddr": "10.0.0.1", 00:19:59.236 "trsvcid": "58932" 00:19:59.236 }, 00:19:59.236 "auth": { 00:19:59.236 "state": "completed", 00:19:59.236 "digest": "sha512", 00:19:59.236 "dhgroup": "ffdhe3072" 00:19:59.236 } 00:19:59.236 } 
00:19:59.236 ]' 00:19:59.236 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.236 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.236 00:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.236 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:59.236 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.236 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.236 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.236 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.805 00:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:20:00.373 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.373 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:00.373 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.373 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.373 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.373 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.373 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:00.373 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:00.632 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:00.632 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.632 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:00.632 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:00.632 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:00.632 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.632 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.632 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.632 00:46:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.632 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.632 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.632 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.891 00:20:00.891 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.891 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.891 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.150 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.150 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.150 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.150 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.150 00:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.150 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.150 { 00:20:01.150 "cntlid": 117, 00:20:01.150 "qid": 0, 00:20:01.150 "state": "enabled", 00:20:01.150 "thread": "nvmf_tgt_poll_group_000", 00:20:01.150 "listen_address": { 00:20:01.150 "trtype": "TCP", 00:20:01.150 "adrfam": "IPv4", 00:20:01.150 "traddr": "10.0.0.2", 00:20:01.150 "trsvcid": "4420" 00:20:01.150 }, 00:20:01.150 "peer_address": { 00:20:01.150 "trtype": "TCP", 00:20:01.150 "adrfam": "IPv4", 00:20:01.150 "traddr": "10.0.0.1", 00:20:01.150 "trsvcid": "58964" 00:20:01.150 }, 00:20:01.150 "auth": { 00:20:01.150 "state": "completed", 00:20:01.150 "digest": "sha512", 00:20:01.150 "dhgroup": "ffdhe3072" 00:20:01.150 } 00:20:01.150 } 00:20:01.150 ]' 00:20:01.150 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.150 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.150 00:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.409 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:01.409 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.409 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.409 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.409 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.669 00:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.605 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.864 00:20:02.864 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.864 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.864 00:46:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.123 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.123 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.123 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.123 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.123 00:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.123 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.123 { 00:20:03.123 "cntlid": 119, 00:20:03.123 "qid": 0, 00:20:03.123 "state": "enabled", 00:20:03.123 "thread": "nvmf_tgt_poll_group_000", 00:20:03.123 "listen_address": { 00:20:03.123 "trtype": "TCP", 00:20:03.123 "adrfam": "IPv4", 00:20:03.123 "traddr": "10.0.0.2", 00:20:03.123 "trsvcid": "4420" 00:20:03.123 }, 00:20:03.123 "peer_address": { 00:20:03.123 "trtype": "TCP", 00:20:03.123 "adrfam": "IPv4", 00:20:03.123 "traddr": "10.0.0.1", 00:20:03.123 "trsvcid": "58984" 00:20:03.123 }, 00:20:03.123 "auth": { 00:20:03.123 "state": "completed", 00:20:03.123 "digest": "sha512", 00:20:03.123 "dhgroup": "ffdhe3072" 00:20:03.123 } 00:20:03.123 } 00:20:03.123 ]' 00:20:03.123 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.123 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.123 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.382 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.382 00:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.382 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.382 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.382 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.641 00:46:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:20:04.239 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.542 00:46:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.542 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.108 00:20:05.108 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.108 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.108 00:46:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.366 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.366 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.366 00:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.366 00:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.366 00:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.366 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.366 { 00:20:05.366 "cntlid": 121, 00:20:05.366 "qid": 0, 00:20:05.366 "state": "enabled", 00:20:05.366 "thread": "nvmf_tgt_poll_group_000", 00:20:05.366 "listen_address": { 00:20:05.366 "trtype": "TCP", 00:20:05.366 "adrfam": "IPv4", 
00:20:05.366 "traddr": "10.0.0.2", 00:20:05.366 "trsvcid": "4420" 00:20:05.366 }, 00:20:05.366 "peer_address": { 00:20:05.366 "trtype": "TCP", 00:20:05.366 "adrfam": "IPv4", 00:20:05.366 "traddr": "10.0.0.1", 00:20:05.366 "trsvcid": "41906" 00:20:05.366 }, 00:20:05.366 "auth": { 00:20:05.366 "state": "completed", 00:20:05.366 "digest": "sha512", 00:20:05.366 "dhgroup": "ffdhe4096" 00:20:05.366 } 00:20:05.366 } 00:20:05.366 ]' 00:20:05.366 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.624 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.624 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.624 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:05.624 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.624 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.624 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.624 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.882 00:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:20:06.816 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.816 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:06.816 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.816 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.816 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.816 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.816 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:06.816 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:07.075 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:07.075 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.075 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:07.075 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:07.075 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:07.075 00:46:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.075 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.075 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.075 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.075 00:46:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.075 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.075 00:46:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.334 00:20:07.334 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.334 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.334 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.593 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.593 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.593 00:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.593 00:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.593 00:46:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.593 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.593 { 00:20:07.593 "cntlid": 123, 00:20:07.593 "qid": 0, 00:20:07.593 "state": "enabled", 00:20:07.593 "thread": "nvmf_tgt_poll_group_000", 00:20:07.593 "listen_address": { 00:20:07.593 "trtype": "TCP", 00:20:07.593 "adrfam": "IPv4", 00:20:07.593 "traddr": "10.0.0.2", 00:20:07.593 "trsvcid": "4420" 00:20:07.593 }, 00:20:07.593 "peer_address": { 00:20:07.593 "trtype": "TCP", 00:20:07.593 "adrfam": "IPv4", 00:20:07.593 "traddr": "10.0.0.1", 00:20:07.593 "trsvcid": "41932" 00:20:07.593 }, 00:20:07.593 "auth": { 00:20:07.593 "state": "completed", 00:20:07.593 "digest": "sha512", 00:20:07.593 "dhgroup": "ffdhe4096" 00:20:07.593 } 00:20:07.593 } 00:20:07.593 ]' 00:20:07.593 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.593 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.593 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.593 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.593 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.851 00:46:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.851 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.851 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.111 00:46:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:20:08.678 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.678 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:08.678 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.678 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.937 00:46:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.937 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.937 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:08.937 00:46:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.195 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:09.195 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.195 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:09.195 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:09.195 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:09.195 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.195 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.195 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.195 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.195 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.195 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.195 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.132 00:20:10.132 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.132 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.132 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.132 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.132 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.132 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.132 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.132 00:46:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.132 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.132 { 00:20:10.132 "cntlid": 125, 00:20:10.132 "qid": 0, 00:20:10.132 "state": "enabled", 00:20:10.132 "thread": "nvmf_tgt_poll_group_000", 00:20:10.132 "listen_address": { 00:20:10.132 "trtype": "TCP", 00:20:10.132 "adrfam": "IPv4", 00:20:10.132 "traddr": "10.0.0.2", 00:20:10.132 "trsvcid": "4420" 00:20:10.132 }, 00:20:10.132 "peer_address": { 00:20:10.132 "trtype": "TCP", 00:20:10.132 "adrfam": "IPv4", 00:20:10.132 "traddr": "10.0.0.1", 00:20:10.132 "trsvcid": "41974" 00:20:10.132 }, 00:20:10.132 "auth": { 00:20:10.132 "state": "completed", 00:20:10.132 "digest": "sha512", 00:20:10.132 "dhgroup": "ffdhe4096" 00:20:10.132 } 00:20:10.132 } 00:20:10.132 ]' 00:20:10.132 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.390 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.390 00:46:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.390 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.390 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.390 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.390 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.390 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.649 00:46:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
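[editor's note] For readability, the connect/verify/teardown cycle that the trace above just completed (and repeats below for the remaining dhgroups and key indexes) is summarized here as a minimal sketch. Paths are abbreviated (the log uses the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py), the host UUID and DHHC-1 secrets are replaced with placeholders, and the exact key index varies per iteration; all commands and flags shown are the ones visible in the trace, not an authoritative recipe.

# 1. Host side: restrict the initiator to one digest/dhgroup combination.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# 2. Target side: allow the host NQN on the subsystem with the key pair under test.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:<host-uuid> \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Host side: attach a controller with the same key pair ...
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:<host-uuid> \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4. ... and verify on the target that the qpair authenticated with the expected parameters.
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
    # expected: sha512 / ffdhe4096 / completed

# 5. Tear down, then repeat the same authentication through the kernel initiator with DHHC-1 secrets.
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --hostid <host-uuid> \
    --dhchap-secret 'DHHC-1:02:<key>' --dhchap-ctrl-secret 'DHHC-1:01:<ctrl-key>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:<host-uuid>

The iterations that follow in the log only swap the key index (key0 through key3) and step the DH group from ffdhe4096 to ffdhe6144 and ffdhe8192, so the same sequence repeats with those parameters. [end note]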
00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.586 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.155 00:20:12.155 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.155 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.155 00:46:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.414 { 00:20:12.414 "cntlid": 127, 00:20:12.414 "qid": 0, 00:20:12.414 "state": "enabled", 00:20:12.414 "thread": "nvmf_tgt_poll_group_000", 00:20:12.414 "listen_address": { 00:20:12.414 "trtype": "TCP", 00:20:12.414 "adrfam": "IPv4", 00:20:12.414 "traddr": "10.0.0.2", 00:20:12.414 "trsvcid": "4420" 00:20:12.414 }, 00:20:12.414 "peer_address": { 00:20:12.414 "trtype": "TCP", 00:20:12.414 "adrfam": "IPv4", 00:20:12.414 "traddr": "10.0.0.1", 00:20:12.414 "trsvcid": "41992" 00:20:12.414 }, 00:20:12.414 "auth": { 00:20:12.414 "state": "completed", 00:20:12.414 "digest": "sha512", 00:20:12.414 "dhgroup": "ffdhe4096" 00:20:12.414 } 00:20:12.414 } 00:20:12.414 ]' 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.414 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.672 00:46:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:20:13.609 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.609 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:13.609 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.609 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.609 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.609 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.609 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.609 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.609 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.869 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:20:13.869 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.869 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:13.869 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:13.869 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:13.869 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.869 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.869 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.869 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.869 00:46:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.869 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.869 00:46:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.447 00:20:14.448 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.448 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.448 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.711 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.712 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.712 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.712 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.712 00:46:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.712 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.712 { 00:20:14.712 "cntlid": 129, 00:20:14.712 "qid": 0, 00:20:14.712 "state": "enabled", 00:20:14.712 "thread": "nvmf_tgt_poll_group_000", 00:20:14.712 "listen_address": { 00:20:14.712 "trtype": "TCP", 00:20:14.712 "adrfam": "IPv4", 00:20:14.712 "traddr": "10.0.0.2", 00:20:14.712 "trsvcid": "4420" 00:20:14.712 }, 00:20:14.712 "peer_address": { 00:20:14.712 "trtype": "TCP", 00:20:14.712 "adrfam": "IPv4", 00:20:14.712 "traddr": "10.0.0.1", 00:20:14.712 "trsvcid": "34266" 00:20:14.712 }, 00:20:14.712 "auth": { 00:20:14.712 "state": "completed", 00:20:14.712 "digest": "sha512", 00:20:14.712 "dhgroup": "ffdhe6144" 00:20:14.712 } 00:20:14.712 } 00:20:14.712 ]' 00:20:14.712 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.712 00:46:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.712 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.712 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.712 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.712 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.712 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.712 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.970 00:46:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:20:15.905 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.905 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:15.905 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.905 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.905 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.905 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.905 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:15.905 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:16.164 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:16.164 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.164 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:16.164 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:16.164 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.164 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.164 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.164 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.164 00:46:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.164 00:46:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.164 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.164 00:46:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.728 00:20:16.728 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.728 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.728 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.986 { 00:20:16.986 "cntlid": 131, 00:20:16.986 "qid": 0, 00:20:16.986 "state": "enabled", 00:20:16.986 "thread": "nvmf_tgt_poll_group_000", 00:20:16.986 "listen_address": { 00:20:16.986 "trtype": "TCP", 00:20:16.986 "adrfam": "IPv4", 00:20:16.986 "traddr": "10.0.0.2", 00:20:16.986 "trsvcid": "4420" 00:20:16.986 }, 00:20:16.986 "peer_address": { 00:20:16.986 "trtype": "TCP", 00:20:16.986 "adrfam": "IPv4", 00:20:16.986 "traddr": "10.0.0.1", 00:20:16.986 "trsvcid": "34282" 00:20:16.986 }, 00:20:16.986 "auth": { 00:20:16.986 "state": "completed", 00:20:16.986 "digest": "sha512", 00:20:16.986 "dhgroup": "ffdhe6144" 00:20:16.986 } 00:20:16.986 } 00:20:16.986 ]' 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.986 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.243 00:46:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:20:18.179 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.179 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:18.179 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.179 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.179 00:46:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.179 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.179 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:18.179 00:46:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:18.179 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:18.179 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.179 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:18.179 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:18.179 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:18.179 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.179 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.179 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.179 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.437 00:46:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.437 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.437 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.005 00:20:19.005 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.005 00:46:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.005 00:46:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.263 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.263 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.263 00:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.263 00:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.263 00:46:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.263 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.263 { 00:20:19.263 "cntlid": 133, 00:20:19.263 "qid": 0, 00:20:19.263 "state": "enabled", 00:20:19.263 "thread": "nvmf_tgt_poll_group_000", 00:20:19.263 "listen_address": { 00:20:19.263 "trtype": "TCP", 00:20:19.263 "adrfam": "IPv4", 00:20:19.263 "traddr": "10.0.0.2", 00:20:19.263 "trsvcid": "4420" 00:20:19.263 }, 00:20:19.263 "peer_address": { 00:20:19.263 "trtype": "TCP", 00:20:19.263 "adrfam": "IPv4", 00:20:19.263 "traddr": "10.0.0.1", 00:20:19.263 "trsvcid": "34310" 00:20:19.263 }, 00:20:19.263 "auth": { 00:20:19.263 "state": "completed", 00:20:19.263 "digest": "sha512", 00:20:19.263 "dhgroup": "ffdhe6144" 00:20:19.263 } 00:20:19.263 } 00:20:19.263 ]' 00:20:19.263 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.263 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.263 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.522 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.522 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.522 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.522 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.522 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.781 00:46:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:20:20.348 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.348 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:20.348 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.348 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.348 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.348 00:46:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.348 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:20.348 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:20.916 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:20.916 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.916 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:20.916 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:20.916 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:20.916 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.916 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:20.916 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.916 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.916 00:46:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.916 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:20.916 00:46:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.853 00:20:21.853 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.853 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.853 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.853 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.853 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.853 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.853 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.853 00:46:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.853 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.853 { 00:20:21.853 "cntlid": 135, 00:20:21.853 "qid": 0, 00:20:21.853 "state": "enabled", 00:20:21.853 "thread": "nvmf_tgt_poll_group_000", 00:20:21.853 "listen_address": { 00:20:21.853 "trtype": "TCP", 00:20:21.853 "adrfam": "IPv4", 00:20:21.853 "traddr": "10.0.0.2", 00:20:21.853 "trsvcid": "4420" 00:20:21.853 }, 
00:20:21.853 "peer_address": { 00:20:21.853 "trtype": "TCP", 00:20:21.854 "adrfam": "IPv4", 00:20:21.854 "traddr": "10.0.0.1", 00:20:21.854 "trsvcid": "34342" 00:20:21.854 }, 00:20:21.854 "auth": { 00:20:21.854 "state": "completed", 00:20:21.854 "digest": "sha512", 00:20:21.854 "dhgroup": "ffdhe6144" 00:20:21.854 } 00:20:21.854 } 00:20:21.854 ]' 00:20:21.854 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.112 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.112 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.112 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:22.112 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.112 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.112 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.112 00:46:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.370 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:20:22.934 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.934 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:22.934 00:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.934 00:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.934 00:46:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.934 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.934 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.934 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:22.935 00:46:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.192 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:23.192 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.192 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:23.192 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.192 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:23.192 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:23.192 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.192 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.192 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.192 00:46:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.192 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.192 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.124 00:20:24.124 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.124 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.124 00:46:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.382 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.382 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.382 00:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.382 00:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.382 00:46:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.382 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.382 { 00:20:24.382 "cntlid": 137, 00:20:24.382 "qid": 0, 00:20:24.382 "state": "enabled", 00:20:24.382 "thread": "nvmf_tgt_poll_group_000", 00:20:24.382 "listen_address": { 00:20:24.382 "trtype": "TCP", 00:20:24.382 "adrfam": "IPv4", 00:20:24.382 "traddr": "10.0.0.2", 00:20:24.382 "trsvcid": "4420" 00:20:24.382 }, 00:20:24.382 "peer_address": { 00:20:24.382 "trtype": "TCP", 00:20:24.382 "adrfam": "IPv4", 00:20:24.382 "traddr": "10.0.0.1", 00:20:24.382 "trsvcid": "34370" 00:20:24.382 }, 00:20:24.382 "auth": { 00:20:24.382 "state": "completed", 00:20:24.382 "digest": "sha512", 00:20:24.382 "dhgroup": "ffdhe8192" 00:20:24.382 } 00:20:24.382 } 00:20:24.382 ]' 00:20:24.382 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.382 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.382 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.382 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.382 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.382 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.382 00:46:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.382 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.641 00:46:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:20:25.574 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.574 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:25.574 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.574 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.574 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.574 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.574 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:25.574 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:25.832 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:25.832 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.832 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:25.832 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:25.832 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:25.832 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.832 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.832 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.832 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.832 00:46:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.832 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.832 00:46:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.399 00:20:26.399 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.399 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.399 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.657 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.657 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.657 00:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.657 00:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.915 00:46:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.915 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.915 { 00:20:26.915 "cntlid": 139, 00:20:26.915 "qid": 0, 00:20:26.915 "state": "enabled", 00:20:26.915 "thread": "nvmf_tgt_poll_group_000", 00:20:26.915 "listen_address": { 00:20:26.915 "trtype": "TCP", 00:20:26.915 "adrfam": "IPv4", 00:20:26.915 "traddr": "10.0.0.2", 00:20:26.915 "trsvcid": "4420" 00:20:26.915 }, 00:20:26.915 "peer_address": { 00:20:26.915 "trtype": "TCP", 00:20:26.915 "adrfam": "IPv4", 00:20:26.915 "traddr": "10.0.0.1", 00:20:26.915 "trsvcid": "55598" 00:20:26.915 }, 00:20:26.915 "auth": { 00:20:26.915 "state": "completed", 00:20:26.915 "digest": "sha512", 00:20:26.915 "dhgroup": "ffdhe8192" 00:20:26.915 } 00:20:26.915 } 00:20:26.915 ]' 00:20:26.915 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.915 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.915 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.915 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.915 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.915 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.915 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.915 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.173 00:46:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:01:N2U2MGJiOGE0Mzg2ODQ0ZDA3YjA5NmM5ODhmM2I0MzNH7hJr: --dhchap-ctrl-secret DHHC-1:02:MWUxYTViM2UxZTQ4NTE4OGE5MzZjNjU4NGY0N2ZkZGQ3MGI4NzY0MTA3NWRlYmEzF9l2vQ==: 00:20:28.109 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.109 00:46:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:28.109 00:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.109 00:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.109 00:46:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.109 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.109 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:28.109 00:46:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:28.367 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:28.367 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.367 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:28.367 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:28.367 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:28.367 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.367 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.367 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.367 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.367 00:46:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.367 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.367 00:46:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.744 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.744 { 00:20:29.744 "cntlid": 141, 00:20:29.744 "qid": 0, 00:20:29.744 "state": "enabled", 00:20:29.744 "thread": "nvmf_tgt_poll_group_000", 00:20:29.744 "listen_address": { 00:20:29.744 "trtype": "TCP", 00:20:29.744 "adrfam": "IPv4", 00:20:29.744 "traddr": "10.0.0.2", 00:20:29.744 "trsvcid": "4420" 00:20:29.744 }, 00:20:29.744 "peer_address": { 00:20:29.744 "trtype": "TCP", 00:20:29.744 "adrfam": "IPv4", 00:20:29.744 "traddr": "10.0.0.1", 00:20:29.744 "trsvcid": "55612" 00:20:29.744 }, 00:20:29.744 "auth": { 00:20:29.744 "state": "completed", 00:20:29.744 "digest": "sha512", 00:20:29.744 "dhgroup": "ffdhe8192" 00:20:29.744 } 00:20:29.744 } 00:20:29.744 ]' 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.744 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.003 00:46:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MDcyMzYwNjlmNDQyYTI1ZWJmMjcxN2NiNWY1NDk1N2QzMzJmNGYzMTM3NDU2Y2MzqrK5lA==: --dhchap-ctrl-secret DHHC-1:01:MTA3ZThjMzk0YjUxYTcwZTBiYjJkZDUzZDNjOTI5NjZSyXPc: 00:20:31.378 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.378 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:31.378 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.378 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.378 00:46:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.378 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.378 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:31.378 00:46:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:31.378 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:20:31.379 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.379 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:31.379 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:31.379 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:31.379 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.379 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:31.379 00:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.379 00:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.379 00:46:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.379 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.379 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.945 00:20:32.203 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.203 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.203 00:46:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.488 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.488 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.488 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.488 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.488 00:46:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.488 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.488 { 00:20:32.488 "cntlid": 143, 00:20:32.488 "qid": 0, 00:20:32.488 "state": "enabled", 00:20:32.488 "thread": "nvmf_tgt_poll_group_000", 00:20:32.488 "listen_address": { 00:20:32.488 "trtype": "TCP", 00:20:32.488 "adrfam": "IPv4", 00:20:32.488 "traddr": "10.0.0.2", 00:20:32.488 "trsvcid": "4420" 00:20:32.488 }, 00:20:32.488 "peer_address": { 00:20:32.488 "trtype": "TCP", 00:20:32.488 "adrfam": "IPv4", 00:20:32.488 "traddr": "10.0.0.1", 00:20:32.488 "trsvcid": "55632" 00:20:32.488 }, 00:20:32.488 "auth": { 00:20:32.488 "state": "completed", 00:20:32.488 "digest": "sha512", 00:20:32.488 "dhgroup": "ffdhe8192" 00:20:32.488 } 00:20:32.488 } 00:20:32.488 ]' 00:20:32.488 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.488 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.488 
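[Editor's note] Each connect_authenticate iteration in the trace above and below follows the same target/auth.sh pattern: the target registers the host NQN on the subsystem with one DH-HMAC-CHAP key (plus a controller key when bidirectional authentication is being exercised), the host attaches with the matching --dhchap-key/--dhchap-ctrlr-key, and the test then asserts via nvmf_subsystem_get_qpairs that the qpair completed authentication with the expected digest and DH group. The sketch below is a minimal standalone rendering of one such iteration, assuming key0/ckey0 are already loaded as SPDK keys on both sides and that, as in this run, the target listens on 10.0.0.2:4420 while the host application answers RPCs on /var/tmp/host.sock; it is illustrative, not the literal test code.

```bash
#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate iteration (sha512 / ffdhe8192).
# Paths, NQNs and key names mirror this log; target RPCs go to the default
# /var/tmp/spdk.sock (the second half of this run additionally wraps the
# target in a network namespace, which is omitted here).
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
key=key0 ckey=ckey0

# Target side: allow this host, using key0 / ckey0 for bidirectional auth.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# Host side: restrict the initiator to sha512 + ffdhe8192, then attach.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# Verify the qpair really negotiated DH-HMAC-CHAP with the expected params.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" |
    jq -e '.[0].auth.state == "completed"
           and .[0].auth.digest == "sha512"
           and .[0].auth.dhgroup == "ffdhe8192"'

# Tear the connection back down for the next key index.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
```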
00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.488 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:32.488 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.488 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.488 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.488 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.773 00:46:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:20:33.708 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.708 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:33.708 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.708 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.708 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.708 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:33.708 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:33.708 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:33.708 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:33.708 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:33.708 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:33.968 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:33.968 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.968 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:33.968 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:33.968 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:33.968 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.968 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:33.968 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.968 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.968 00:46:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.968 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.968 00:46:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.533 00:20:34.533 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.533 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.533 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.791 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.791 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.791 00:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.791 00:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.791 00:46:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.791 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.791 { 00:20:34.791 "cntlid": 145, 00:20:34.791 "qid": 0, 00:20:34.791 "state": "enabled", 00:20:34.791 "thread": "nvmf_tgt_poll_group_000", 00:20:34.791 "listen_address": { 00:20:34.791 "trtype": "TCP", 00:20:34.791 "adrfam": "IPv4", 00:20:34.791 "traddr": "10.0.0.2", 00:20:34.791 "trsvcid": "4420" 00:20:34.791 }, 00:20:34.791 "peer_address": { 00:20:34.791 "trtype": "TCP", 00:20:34.791 "adrfam": "IPv4", 00:20:34.791 "traddr": "10.0.0.1", 00:20:34.791 "trsvcid": "54274" 00:20:34.791 }, 00:20:34.791 "auth": { 00:20:34.791 "state": "completed", 00:20:34.791 "digest": "sha512", 00:20:34.791 "dhgroup": "ffdhe8192" 00:20:34.791 } 00:20:34.791 } 00:20:34.791 ]' 00:20:34.791 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.049 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.049 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.049 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.049 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.049 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.049 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.049 00:46:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.613 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDkwNzE1OGJlODZkOWUwZjUzNjA1YjQyNDc3MzBmNjlkNjI2NGY5NTA5ZmE5NzAww33+wg==: --dhchap-ctrl-secret DHHC-1:03:YWVhZmRmZjExNzU0OTUwYTA2NTg5YTI4YzBkMGFhNjQ5MzM2NDNjNGJkZjRjOTg3NmUzZDRiMjUxYjQ2YjgyY/OxiYw=: 00:20:36.179 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.179 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:36.179 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.179 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.179 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.179 00:46:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:20:36.179 00:46:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.179 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.179 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.179 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:36.179 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:36.179 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:36.179 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:36.179 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.179 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:36.179 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.179 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:36.179 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:20:37.116 request: 00:20:37.116 { 00:20:37.116 "name": "nvme0", 00:20:37.116 "trtype": "tcp", 00:20:37.116 "traddr": "10.0.0.2", 00:20:37.116 "adrfam": "ipv4", 00:20:37.116 "trsvcid": "4420", 00:20:37.116 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:37.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:37.116 "prchk_reftag": false, 00:20:37.116 "prchk_guard": false, 00:20:37.116 "hdgst": false, 00:20:37.116 "ddgst": false, 00:20:37.116 "dhchap_key": "key2", 00:20:37.116 "method": "bdev_nvme_attach_controller", 00:20:37.116 "req_id": 1 00:20:37.116 } 00:20:37.116 Got JSON-RPC error response 00:20:37.116 response: 00:20:37.116 { 00:20:37.116 "code": -5, 00:20:37.116 "message": "Input/output error" 00:20:37.116 } 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:37.116 00:46:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:38.054 request: 00:20:38.054 { 00:20:38.054 "name": "nvme0", 00:20:38.054 "trtype": "tcp", 00:20:38.054 "traddr": "10.0.0.2", 00:20:38.054 "adrfam": "ipv4", 00:20:38.054 "trsvcid": "4420", 00:20:38.054 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:38.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:38.054 "prchk_reftag": false, 00:20:38.054 "prchk_guard": false, 00:20:38.054 "hdgst": false, 00:20:38.054 "ddgst": false, 00:20:38.054 "dhchap_key": "key1", 00:20:38.054 "dhchap_ctrlr_key": "ckey2", 00:20:38.054 "method": "bdev_nvme_attach_controller", 00:20:38.054 "req_id": 1 00:20:38.054 } 00:20:38.054 Got JSON-RPC error response 00:20:38.054 response: 00:20:38.054 { 00:20:38.054 "code": -5, 00:20:38.054 "message": "Input/output error" 00:20:38.054 } 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key1 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:38.054 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:38.055 00:46:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.055 00:46:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.623 request: 00:20:38.623 { 00:20:38.623 "name": "nvme0", 00:20:38.623 "trtype": "tcp", 00:20:38.623 "traddr": "10.0.0.2", 00:20:38.623 "adrfam": "ipv4", 00:20:38.623 "trsvcid": "4420", 00:20:38.623 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:38.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:38.623 "prchk_reftag": false, 00:20:38.623 "prchk_guard": false, 00:20:38.623 "hdgst": false, 00:20:38.623 "ddgst": false, 00:20:38.623 "dhchap_key": "key1", 00:20:38.623 "dhchap_ctrlr_key": "ckey1", 00:20:38.623 "method": "bdev_nvme_attach_controller", 00:20:38.623 "req_id": 1 00:20:38.623 } 00:20:38.623 Got JSON-RPC error response 00:20:38.623 response: 00:20:38.623 { 00:20:38.623 "code": -5, 00:20:38.623 "message": "Input/output error" 00:20:38.623 } 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3036401 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3036401 ']' 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3036401 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3036401 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3036401' 00:20:38.623 killing process with pid 3036401 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3036401 00:20:38.623 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3036401 00:20:38.881 00:46:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:38.881 00:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.881 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.881 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.881 00:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3068369 00:20:38.881 00:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3068369 00:20:38.881 00:46:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:38.881 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3068369 ']' 00:20:38.881 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.881 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.881 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.881 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.881 00:46:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3068369 00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3068369 ']' 00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
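[Editor's note] The request/response blocks above are the expected outcome of the negative tests in target/auth.sh@117-132: whenever the host offers a key or controller key that does not match what nvmf_subsystem_add_host configured, bdev_nvme_attach_controller is supposed to fail, and the test's NOT wrapper treats the JSON-RPC error -5 ("Input/output error") as a pass. A hedged sketch of that check, reusing the NQNs, socket path and key names from this log and assuming key1/key2 are already registered SPDK keys:

```bash
#!/usr/bin/env bash
# Sketch of the mismatched-key negative test: the target only knows key1 for
# this host, so an attach offering key2 must be rejected (the log shows the
# resulting JSON-RPC error -5, "Input/output error").
set -uo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562

# Target side: register the host with key1 only.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1

# Host side: attaching with key2 must NOT succeed.
if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key2; then
    echo "unexpected success: authentication should have failed" >&2
    exit 1
fi
echo "attach failed as expected"
```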
00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.259 00:46:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.518 00:46:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.455 00:20:41.715 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.715 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.715 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.974 { 00:20:41.974 
"cntlid": 1, 00:20:41.974 "qid": 0, 00:20:41.974 "state": "enabled", 00:20:41.974 "thread": "nvmf_tgt_poll_group_000", 00:20:41.974 "listen_address": { 00:20:41.974 "trtype": "TCP", 00:20:41.974 "adrfam": "IPv4", 00:20:41.974 "traddr": "10.0.0.2", 00:20:41.974 "trsvcid": "4420" 00:20:41.974 }, 00:20:41.974 "peer_address": { 00:20:41.974 "trtype": "TCP", 00:20:41.974 "adrfam": "IPv4", 00:20:41.974 "traddr": "10.0.0.1", 00:20:41.974 "trsvcid": "54332" 00:20:41.974 }, 00:20:41.974 "auth": { 00:20:41.974 "state": "completed", 00:20:41.974 "digest": "sha512", 00:20:41.974 "dhgroup": "ffdhe8192" 00:20:41.974 } 00:20:41.974 } 00:20:41.974 ]' 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.974 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.232 00:46:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid 00abaa28-3537-eb11-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTcwNmJkMjk3NmEzYjA4Yzg4ZWFlZmEwNWNmM2I2NWZmZTJlMmM0NmQ0NTI3NWQ0YzQ3OTJmMDlkM2UyMmU4MGRHfX0=: 00:20:43.169 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.169 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:43.169 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.169 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.169 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.169 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --dhchap-key key3 00:20:43.169 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.169 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.169 00:47:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.169 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:43.169 00:47:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:43.428 00:47:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.428 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:43.428 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.428 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:43.428 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.428 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:43.428 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.428 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.428 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.687 request: 00:20:43.687 { 00:20:43.687 "name": "nvme0", 00:20:43.687 "trtype": "tcp", 00:20:43.687 "traddr": "10.0.0.2", 00:20:43.687 "adrfam": "ipv4", 00:20:43.687 "trsvcid": "4420", 00:20:43.687 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:43.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:43.687 "prchk_reftag": false, 00:20:43.687 "prchk_guard": false, 00:20:43.687 "hdgst": false, 00:20:43.687 "ddgst": false, 00:20:43.687 "dhchap_key": "key3", 00:20:43.687 "method": "bdev_nvme_attach_controller", 00:20:43.687 "req_id": 1 00:20:43.687 } 00:20:43.687 Got JSON-RPC error response 00:20:43.687 response: 00:20:43.687 { 00:20:43.687 "code": -5, 00:20:43.687 "message": "Input/output error" 00:20:43.687 } 00:20:43.687 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:43.687 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:43.687 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:43.687 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:43.687 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:43.687 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:43.687 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:43.687 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:43.946 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.946 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:43.946 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.946 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:43.946 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.946 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:43.946 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.946 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.946 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.204 request: 00:20:44.204 { 00:20:44.205 "name": "nvme0", 00:20:44.205 "trtype": "tcp", 00:20:44.205 "traddr": "10.0.0.2", 00:20:44.205 "adrfam": "ipv4", 00:20:44.205 "trsvcid": "4420", 00:20:44.205 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:44.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:44.205 "prchk_reftag": false, 00:20:44.205 "prchk_guard": false, 00:20:44.205 "hdgst": false, 00:20:44.205 "ddgst": false, 00:20:44.205 "dhchap_key": "key3", 00:20:44.205 "method": "bdev_nvme_attach_controller", 00:20:44.205 "req_id": 1 00:20:44.205 } 00:20:44.205 Got JSON-RPC error response 00:20:44.205 response: 00:20:44.205 { 00:20:44.205 "code": -5, 00:20:44.205 "message": "Input/output error" 00:20:44.205 } 00:20:44.205 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:44.205 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:44.205 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:44.205 00:47:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:44.205 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:44.205 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:44.205 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:44.205 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:44.205 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:44.205 00:47:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:44.463 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:44.463 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.463 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.463 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.463 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:44.463 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.463 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.463 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.463 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:44.464 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:44.464 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:44.464 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:44.464 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.464 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:44.464 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:44.464 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:44.464 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:44.722 request: 00:20:44.722 { 00:20:44.722 "name": "nvme0", 00:20:44.722 "trtype": "tcp", 00:20:44.722 "traddr": "10.0.0.2", 00:20:44.722 "adrfam": "ipv4", 00:20:44.722 "trsvcid": "4420", 00:20:44.722 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:44.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562", 00:20:44.722 "prchk_reftag": false, 00:20:44.722 "prchk_guard": false, 00:20:44.722 "hdgst": false, 00:20:44.722 "ddgst": false, 00:20:44.722 
"dhchap_key": "key0", 00:20:44.722 "dhchap_ctrlr_key": "key1", 00:20:44.722 "method": "bdev_nvme_attach_controller", 00:20:44.722 "req_id": 1 00:20:44.722 } 00:20:44.722 Got JSON-RPC error response 00:20:44.722 response: 00:20:44.722 { 00:20:44.722 "code": -5, 00:20:44.722 "message": "Input/output error" 00:20:44.722 } 00:20:44.722 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:44.722 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:44.722 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:44.722 00:47:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:44.722 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:44.722 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:44.981 00:20:44.981 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:44.981 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:44.981 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.240 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.240 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.240 00:47:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.498 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:45.498 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:45.498 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3036551 00:20:45.498 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3036551 ']' 00:20:45.499 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3036551 00:20:45.499 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:45.499 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:45.499 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3036551 00:20:45.499 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:45.499 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:45.499 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3036551' 00:20:45.499 killing process with pid 3036551 00:20:45.499 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3036551 00:20:45.499 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3036551 
00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:46.064 rmmod nvme_tcp 00:20:46.064 rmmod nvme_fabrics 00:20:46.064 rmmod nvme_keyring 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3068369 ']' 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3068369 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3068369 ']' 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3068369 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3068369 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3068369' 00:20:46.064 killing process with pid 3068369 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3068369 00:20:46.064 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3068369 00:20:46.322 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:46.322 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:46.322 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:46.322 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.322 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:46.322 00:47:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.322 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.322 00:47:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.226 00:47:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:48.226 00:47:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.VUn /tmp/spdk.key-sha256.ry9 /tmp/spdk.key-sha384.rGU /tmp/spdk.key-sha512.XjQ /tmp/spdk.key-sha512.9Bw /tmp/spdk.key-sha384.77I /tmp/spdk.key-sha256.dc7 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:48.226 00:20:48.226 real 3m7.745s 00:20:48.226 user 7m20.530s 00:20:48.226 sys 0m25.398s 00:20:48.226 00:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:48.226 00:47:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.226 ************************************ 00:20:48.226 END TEST nvmf_auth_target 00:20:48.226 ************************************ 00:20:48.226 00:47:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:48.226 00:47:06 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:48.226 00:47:06 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:48.226 00:47:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:48.226 00:47:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:48.226 00:47:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:48.485 ************************************ 00:20:48.485 START TEST nvmf_bdevio_no_huge 00:20:48.485 ************************************ 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:48.485 * Looking for test storage... 00:20:48.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
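The common.sh block above establishes the host identity used by every nvme connect in these tests. A small sketch of that step; the derivation of NVME_HOSTID from the generated NQN is an assumption based on the values seen in the trace:

    # Host identity as set up by test/nvmf/common.sh (UUID differs per run/host).
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: host ID is the UUID part of the NQN
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")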
00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.485 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.486 00:47:06 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:48.486 00:47:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:55.052 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:55.053 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:55.053 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:55.053 
00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:55.053 Found net devices under 0000:af:00.0: cvl_0_0 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:55.053 Found net devices under 0000:af:00.1: cvl_0_1 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.053 00:47:11 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:55.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:20:55.053 00:20:55.053 --- 10.0.0.2 ping statistics --- 00:20:55.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.053 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:20:55.053 00:20:55.053 --- 10.0.0.1 ping statistics --- 00:20:55.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.053 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3073196 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 
3073196 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3073196 ']' 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.053 00:47:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:55.053 [2024-07-16 00:47:12.009283] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:20:55.054 [2024-07-16 00:47:12.009343] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:55.054 [2024-07-16 00:47:12.120487] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.054 [2024-07-16 00:47:12.354206] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.054 [2024-07-16 00:47:12.354276] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.054 [2024-07-16 00:47:12.354298] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.054 [2024-07-16 00:47:12.354315] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.054 [2024-07-16 00:47:12.354332] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
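The nvmf_tcp_init steps above place one e810 port (cvl_0_0, 10.0.0.2) inside the cvl_0_0_ns_spdk namespace for the target, leave the other port (cvl_0_1, 10.0.0.1) in the root namespace for the initiator, verify reachability in both directions, and then launch nvmf_tgt inside the namespace. A condensed sketch of that setup using the interface names, addresses, and target arguments from this run:

    # Condensed nvmf_tcp_init as traced above (interface names are host-specific).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
    # bdevio variant of the target, as launched in this log:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78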
00:20:55.054 [2024-07-16 00:47:12.354469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:55.054 [2024-07-16 00:47:12.354510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:55.054 [2024-07-16 00:47:12.354625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:55.054 [2024-07-16 00:47:12.354630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.312 00:47:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.312 00:47:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:55.312 00:47:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:55.312 00:47:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:55.312 00:47:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:55.312 00:47:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.312 00:47:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:55.312 00:47:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.312 00:47:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:55.312 [2024-07-16 00:47:13.000663] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:55.312 Malloc0 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:55.312 [2024-07-16 00:47:13.054662] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:55.312 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.313 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.313 { 00:20:55.313 "params": { 00:20:55.313 "name": "Nvme$subsystem", 00:20:55.313 "trtype": "$TEST_TRANSPORT", 00:20:55.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.313 "adrfam": "ipv4", 00:20:55.313 "trsvcid": "$NVMF_PORT", 00:20:55.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.313 "hdgst": ${hdgst:-false}, 00:20:55.313 "ddgst": ${ddgst:-false} 00:20:55.313 }, 00:20:55.313 "method": "bdev_nvme_attach_controller" 00:20:55.313 } 00:20:55.313 EOF 00:20:55.313 )") 00:20:55.313 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:55.313 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:55.313 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:55.313 00:47:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:55.313 "params": { 00:20:55.313 "name": "Nvme1", 00:20:55.313 "trtype": "tcp", 00:20:55.313 "traddr": "10.0.0.2", 00:20:55.313 "adrfam": "ipv4", 00:20:55.313 "trsvcid": "4420", 00:20:55.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.313 "hdgst": false, 00:20:55.313 "ddgst": false 00:20:55.313 }, 00:20:55.313 "method": "bdev_nvme_attach_controller" 00:20:55.313 }' 00:20:55.313 [2024-07-16 00:47:13.107265] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
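Target-side provisioning for this bdevio run is the short RPC sequence traced above: create the TCP transport, back it with a 64 MiB / 512 B-block malloc bdev, expose it through cnode1, and open a listener on 10.0.0.2:4420; bdevio then attaches using the generated JSON shown above. A sketch of the same sequence, assuming rpc_cmd resolves to scripts/rpc.py against the target's default /var/tmp/spdk.sock:

    # Sketch of the target setup driven by bdevio.sh (values from the trace;
    # the rpc.py socket path is an assumption about rpc_cmd's default).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420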
00:20:55.313 [2024-07-16 00:47:13.107325] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3073475 ] 00:20:55.570 [2024-07-16 00:47:13.192496] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:55.570 [2024-07-16 00:47:13.310806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.570 [2024-07-16 00:47:13.310922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.570 [2024-07-16 00:47:13.310922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.828 I/O targets: 00:20:55.828 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:55.828 00:20:55.828 00:20:55.828 CUnit - A unit testing framework for C - Version 2.1-3 00:20:55.828 http://cunit.sourceforge.net/ 00:20:55.828 00:20:55.828 00:20:55.828 Suite: bdevio tests on: Nvme1n1 00:20:55.828 Test: blockdev write read block ...passed 00:20:55.828 Test: blockdev write zeroes read block ...passed 00:20:55.828 Test: blockdev write zeroes read no split ...passed 00:20:55.828 Test: blockdev write zeroes read split ...passed 00:20:56.085 Test: blockdev write zeroes read split partial ...passed 00:20:56.085 Test: blockdev reset ...[2024-07-16 00:47:13.685959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:56.085 [2024-07-16 00:47:13.686034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14406c0 (9): Bad file descriptor 00:20:56.085 [2024-07-16 00:47:13.783377] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:56.085 passed 00:20:56.085 Test: blockdev write read 8 blocks ...passed 00:20:56.085 Test: blockdev write read size > 128k ...passed 00:20:56.085 Test: blockdev write read invalid size ...passed 00:20:56.085 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:56.085 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:56.085 Test: blockdev write read max offset ...passed 00:20:56.085 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:56.343 Test: blockdev writev readv 8 blocks ...passed 00:20:56.343 Test: blockdev writev readv 30 x 1block ...passed 00:20:56.343 Test: blockdev writev readv block ...passed 00:20:56.343 Test: blockdev writev readv size > 128k ...passed 00:20:56.343 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:56.343 Test: blockdev comparev and writev ...[2024-07-16 00:47:14.041073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:56.343 [2024-07-16 00:47:14.041138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.344 [2024-07-16 00:47:14.041181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:56.344 [2024-07-16 00:47:14.041205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.344 [2024-07-16 00:47:14.041812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:56.344 [2024-07-16 00:47:14.041845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:56.344 [2024-07-16 00:47:14.041882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:56.344 [2024-07-16 00:47:14.041904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:56.344 [2024-07-16 00:47:14.042525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:56.344 [2024-07-16 00:47:14.042557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:56.344 [2024-07-16 00:47:14.042594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:56.344 [2024-07-16 00:47:14.042615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:56.344 [2024-07-16 00:47:14.043214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:56.344 [2024-07-16 00:47:14.043245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:56.344 [2024-07-16 00:47:14.043294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:56.344 [2024-07-16 00:47:14.043316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:56.344 passed 00:20:56.344 Test: blockdev nvme passthru rw ...passed 00:20:56.344 Test: blockdev nvme passthru vendor specific ...[2024-07-16 00:47:14.125823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:56.344 [2024-07-16 00:47:14.125864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:56.344 [2024-07-16 00:47:14.126129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:56.344 [2024-07-16 00:47:14.126158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:56.344 [2024-07-16 00:47:14.126442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:56.344 [2024-07-16 00:47:14.126472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:56.344 [2024-07-16 00:47:14.126739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:56.344 [2024-07-16 00:47:14.126769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:56.344 passed 00:20:56.344 Test: blockdev nvme admin passthru ...passed 00:20:56.344 Test: blockdev copy ...passed 00:20:56.344 00:20:56.344 Run Summary: Type Total Ran Passed Failed Inactive 00:20:56.344 suites 1 1 n/a 0 0 00:20:56.344 tests 23 23 23 0 0 00:20:56.344 asserts 152 152 152 0 n/a 00:20:56.344 00:20:56.344 Elapsed time = 1.399 seconds 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.911 rmmod nvme_tcp 00:20:56.911 rmmod nvme_fabrics 00:20:56.911 rmmod nvme_keyring 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3073196 ']' 00:20:56.911 00:47:14 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3073196 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3073196 ']' 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3073196 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3073196 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3073196' 00:20:56.911 killing process with pid 3073196 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3073196 00:20:56.911 00:47:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3073196 00:20:57.847 00:47:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.847 00:47:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.847 00:47:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.847 00:47:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.847 00:47:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.847 00:47:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.847 00:47:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.847 00:47:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.753 00:47:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:59.753 00:20:59.753 real 0m11.423s 00:20:59.753 user 0m15.234s 00:20:59.753 sys 0m5.827s 00:20:59.753 00:47:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:59.753 00:47:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:59.753 ************************************ 00:20:59.753 END TEST nvmf_bdevio_no_huge 00:20:59.753 ************************************ 00:20:59.753 00:47:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:59.753 00:47:17 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:59.753 00:47:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:59.753 00:47:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.753 00:47:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:59.753 ************************************ 00:20:59.753 START TEST nvmf_tls 00:20:59.753 ************************************ 00:20:59.753 00:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:00.012 * Looking for test storage... 
00:21:00.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:21:00.012 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:00.013 00:47:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:05.289 
00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:05.289 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:05.289 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:05.289 Found net devices under 0000:af:00.0: cvl_0_0 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:05.289 Found net devices under 0000:af:00.1: cvl_0_1 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.289 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:05.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:21:05.549 00:21:05.549 --- 10.0.0.2 ping statistics --- 00:21:05.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.549 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:05.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:21:05.549 00:21:05.549 --- 10.0.0.1 ping statistics --- 00:21:05.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.549 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:05.549 00:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.807 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3077465 00:21:05.807 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3077465 00:21:05.807 00:47:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:05.807 00:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3077465 ']' 00:21:05.807 00:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.807 00:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.807 00:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.807 00:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.807 00:47:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.807 [2024-07-16 00:47:23.443671] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:21:05.807 [2024-07-16 00:47:23.443727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.807 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.807 [2024-07-16 00:47:23.534504] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.807 [2024-07-16 00:47:23.638458] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.807 [2024-07-16 00:47:23.638507] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:05.807 [2024-07-16 00:47:23.638519] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.807 [2024-07-16 00:47:23.638530] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.807 [2024-07-16 00:47:23.638539] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.807 [2024-07-16 00:47:23.638573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.743 00:47:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.743 00:47:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:06.743 00:47:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.743 00:47:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:06.743 00:47:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.743 00:47:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.743 00:47:24 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:06.743 00:47:24 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:07.002 true 00:21:07.002 00:47:24 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:07.002 00:47:24 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:07.276 00:47:24 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:07.276 00:47:24 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:07.276 00:47:24 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:07.576 00:47:25 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:07.576 00:47:25 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:07.864 00:47:25 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:07.864 00:47:25 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:07.864 00:47:25 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:08.124 00:47:25 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:08.124 00:47:25 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:08.124 00:47:25 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:08.124 00:47:25 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:08.124 00:47:25 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:08.125 00:47:25 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:08.383 00:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:08.383 00:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:08.383 00:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:08.643 00:47:26 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:08.643 00:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:08.902 00:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:08.902 00:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:08.902 00:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:09.161 00:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:09.161 00:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:09.420 00:47:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:09.678 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:09.678 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:09.678 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.qMjX2YnxCF 00:21:09.678 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:09.678 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.agcjQkjSvw 00:21:09.678 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:09.678 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:09.678 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.qMjX2YnxCF 00:21:09.678 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.agcjQkjSvw 00:21:09.678 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:21:09.937 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:10.196 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.qMjX2YnxCF 00:21:10.196 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.qMjX2YnxCF 00:21:10.196 00:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:10.455 [2024-07-16 00:47:28.083848] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.455 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:10.713 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:10.972 [2024-07-16 00:47:28.553110] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:10.972 [2024-07-16 00:47:28.553366] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.972 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:10.972 malloc0 00:21:11.231 00:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:11.231 00:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qMjX2YnxCF 00:21:11.490 [2024-07-16 00:47:29.261374] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:11.490 00:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.qMjX2YnxCF 00:21:11.490 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.729 Initializing NVMe Controllers 00:21:23.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:23.729 Initialization complete. Launching workers. 
00:21:23.729 ======================================================== 00:21:23.729 Latency(us) 00:21:23.729 Device Information : IOPS MiB/s Average min max 00:21:23.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8392.79 32.78 7627.86 1129.64 9178.52 00:21:23.729 ======================================================== 00:21:23.729 Total : 8392.79 32.78 7627.86 1129.64 9178.52 00:21:23.729 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qMjX2YnxCF 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qMjX2YnxCF' 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3080190 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3080190 /var/tmp/bdevperf.sock 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3080190 ']' 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.729 00:47:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.729 [2024-07-16 00:47:39.465566] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:21:23.729 [2024-07-16 00:47:39.465627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080190 ] 00:21:23.729 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.729 [2024-07-16 00:47:39.580745] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.729 [2024-07-16 00:47:39.728844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.729 00:47:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.729 00:47:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:23.729 00:47:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qMjX2YnxCF 00:21:23.729 [2024-07-16 00:47:40.651378] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.729 [2024-07-16 00:47:40.651532] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:23.729 TLSTESTn1 00:21:23.729 00:47:40 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:23.729 Running I/O for 10 seconds... 00:21:33.706 00:21:33.706 Latency(us) 00:21:33.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.706 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:33.706 Verification LBA range: start 0x0 length 0x2000 00:21:33.706 TLSTESTn1 : 10.02 2795.44 10.92 0.00 0.00 45665.18 9830.40 45994.36 00:21:33.706 =================================================================================================================== 00:21:33.706 Total : 2795.44 10.92 0.00 0.00 45665.18 9830.40 45994.36 00:21:33.706 0 00:21:33.706 00:47:50 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:33.706 00:47:50 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3080190 00:21:33.706 00:47:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3080190 ']' 00:21:33.706 00:47:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3080190 00:21:33.706 00:47:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:33.706 00:47:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:33.706 00:47:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3080190 00:21:33.706 00:47:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:33.706 00:47:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:33.706 00:47:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3080190' 00:21:33.706 killing process with pid 3080190 00:21:33.706 00:47:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3080190 00:21:33.706 Received shutdown signal, test time was about 10.000000 seconds 00:21:33.706 00:21:33.706 Latency(us) 00:21:33.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:21:33.706 =================================================================================================================== 00:21:33.706 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.706 [2024-07-16 00:47:50.989173] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:33.706 00:47:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3080190 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.agcjQkjSvw 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.agcjQkjSvw 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.agcjQkjSvw 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.agcjQkjSvw' 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3082256 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3082256 /var/tmp/bdevperf.sock 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3082256 ']' 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.706 00:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.706 [2024-07-16 00:47:51.400880] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:21:33.706 [2024-07-16 00:47:51.400949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082256 ] 00:21:33.706 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.706 [2024-07-16 00:47:51.520412] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.967 [2024-07-16 00:47:51.664337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.541 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.541 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:34.541 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.agcjQkjSvw 00:21:34.799 [2024-07-16 00:47:52.584889] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:34.799 [2024-07-16 00:47:52.585038] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:34.799 [2024-07-16 00:47:52.594281] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:34.799 [2024-07-16 00:47:52.594517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167cd20 (107): Transport endpoint is not connected 00:21:34.799 [2024-07-16 00:47:52.595498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167cd20 (9): Bad file descriptor 00:21:34.799 [2024-07-16 00:47:52.596504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.799 [2024-07-16 00:47:52.596532] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:34.799 [2024-07-16 00:47:52.596552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
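Note on the failure traced above: it is intentional, the first negative case in target/tls.sh. The target's host entry for nqn.2016-06.io.spdk:host1 was registered with the first key (/tmp/tmp.qMjX2YnxCF), while bdevperf attaches with the second key (/tmp/tmp.agcjQkjSvw), so the TLS handshake cannot complete and bdev_nvme_attach_controller returns -5 (Input/output error). The NOT wrapper (its exit-status handling is visible above in the common/autotest_common.sh frames) inverts the outcome, so this step passes only because the attach fails. A minimal sketch of the same check, reusing the RPC call shown in this run (key file names and the bdevperf RPC socket are the ones from this log):

  # Expect the attach with the mismatched PSK to fail; `!` inverts the exit status.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  if ! $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.agcjQkjSvw; then
      echo "attach failed as expected: PSK does not match the key registered for host1"
  fi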
00:21:34.799 request: 00:21:34.799 { 00:21:34.799 "name": "TLSTEST", 00:21:34.799 "trtype": "tcp", 00:21:34.799 "traddr": "10.0.0.2", 00:21:34.799 "adrfam": "ipv4", 00:21:34.799 "trsvcid": "4420", 00:21:34.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.799 "prchk_reftag": false, 00:21:34.799 "prchk_guard": false, 00:21:34.799 "hdgst": false, 00:21:34.799 "ddgst": false, 00:21:34.799 "psk": "/tmp/tmp.agcjQkjSvw", 00:21:34.799 "method": "bdev_nvme_attach_controller", 00:21:34.799 "req_id": 1 00:21:34.799 } 00:21:34.799 Got JSON-RPC error response 00:21:34.799 response: 00:21:34.799 { 00:21:34.799 "code": -5, 00:21:34.799 "message": "Input/output error" 00:21:34.799 } 00:21:34.799 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3082256 00:21:34.799 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3082256 ']' 00:21:34.799 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3082256 00:21:34.799 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:34.799 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.800 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3082256 00:21:35.058 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:35.058 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:35.058 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3082256' 00:21:35.058 killing process with pid 3082256 00:21:35.058 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3082256 00:21:35.058 Received shutdown signal, test time was about 10.000000 seconds 00:21:35.058 00:21:35.058 Latency(us) 00:21:35.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.058 =================================================================================================================== 00:21:35.058 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:35.058 [2024-07-16 00:47:52.679430] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:35.058 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3082256 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qMjX2YnxCF 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qMjX2YnxCF 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qMjX2YnxCF 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qMjX2YnxCF' 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3082541 00:21:35.317 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:35.318 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:35.318 00:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3082541 /var/tmp/bdevperf.sock 00:21:35.318 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3082541 ']' 00:21:35.318 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.318 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.318 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.318 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.318 00:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.318 [2024-07-16 00:47:53.043194] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:21:35.318 [2024-07-16 00:47:53.043274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082541 ] 00:21:35.318 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.577 [2024-07-16 00:47:53.161274] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.577 [2024-07-16 00:47:53.303024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.514 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.514 00:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:36.514 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.qMjX2YnxCF 00:21:36.514 [2024-07-16 00:47:54.226785] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.514 [2024-07-16 00:47:54.226946] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:36.514 [2024-07-16 00:47:54.237845] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:36.514 [2024-07-16 00:47:54.237882] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:36.514 [2024-07-16 00:47:54.237926] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:36.514 [2024-07-16 00:47:54.238349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4ed20 (107): Transport endpoint is not connected 00:21:36.515 [2024-07-16 00:47:54.239331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4ed20 (9): Bad file descriptor 00:21:36.515 [2024-07-16 00:47:54.240330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.515 [2024-07-16 00:47:54.240356] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:36.515 [2024-07-16 00:47:54.240376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
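This second negative case presents the correct key but the wrong identity: the initiator calls itself nqn.2016-06.io.spdk:host2, which was never added to the subsystem with nvmf_subsystem_add_host, so the target's PSK lookup by TLS identity (host NQN plus subsystem NQN, logged above as 'NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1') finds nothing and the connection is rejected. For illustration only (not part of this run), the sketch below shows the target-side registration that would let this same attach succeed, mirroring the earlier host1 registration:

  # Hypothetical: allow host2 on the TLS subsystem with a retained PSK file, so the
  # identity "NVMe0R01 ...host2 ...cnode1" can be resolved by the target.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 \
      --psk /tmp/tmp.qMjX2YnxCF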
00:21:36.515 request: 00:21:36.515 { 00:21:36.515 "name": "TLSTEST", 00:21:36.515 "trtype": "tcp", 00:21:36.515 "traddr": "10.0.0.2", 00:21:36.515 "adrfam": "ipv4", 00:21:36.515 "trsvcid": "4420", 00:21:36.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.515 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:36.515 "prchk_reftag": false, 00:21:36.515 "prchk_guard": false, 00:21:36.515 "hdgst": false, 00:21:36.515 "ddgst": false, 00:21:36.515 "psk": "/tmp/tmp.qMjX2YnxCF", 00:21:36.515 "method": "bdev_nvme_attach_controller", 00:21:36.515 "req_id": 1 00:21:36.515 } 00:21:36.515 Got JSON-RPC error response 00:21:36.515 response: 00:21:36.515 { 00:21:36.515 "code": -5, 00:21:36.515 "message": "Input/output error" 00:21:36.515 } 00:21:36.515 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3082541 00:21:36.515 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3082541 ']' 00:21:36.515 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3082541 00:21:36.515 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:36.515 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.515 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3082541 00:21:36.515 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:36.515 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:36.515 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3082541' 00:21:36.515 killing process with pid 3082541 00:21:36.515 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3082541 00:21:36.515 Received shutdown signal, test time was about 10.000000 seconds 00:21:36.515 00:21:36.515 Latency(us) 00:21:36.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.515 =================================================================================================================== 00:21:36.515 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:36.515 [2024-07-16 00:47:54.321280] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:36.515 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3082541 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qMjX2YnxCF 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qMjX2YnxCF 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qMjX2YnxCF 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qMjX2YnxCF' 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3082862 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3082862 /var/tmp/bdevperf.sock 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3082862 ']' 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.774 00:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.034 [2024-07-16 00:47:54.658007] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:21:37.034 [2024-07-16 00:47:54.658074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082862 ] 00:21:37.034 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.034 [2024-07-16 00:47:54.774608] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.293 [2024-07-16 00:47:54.917207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.862 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.862 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:37.862 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qMjX2YnxCF 00:21:38.121 [2024-07-16 00:47:55.842580] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:38.121 [2024-07-16 00:47:55.842738] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:38.121 [2024-07-16 00:47:55.852128] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:38.121 [2024-07-16 00:47:55.852161] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:38.121 [2024-07-16 00:47:55.852205] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:38.121 [2024-07-16 00:47:55.853186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eefd20 (107): Transport endpoint is not connected 00:21:38.122 [2024-07-16 00:47:55.854170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eefd20 (9): Bad file descriptor 00:21:38.122 [2024-07-16 00:47:55.855167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:38.122 [2024-07-16 00:47:55.855191] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:38.122 [2024-07-16 00:47:55.855211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
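Third negative case: same key and same host NQN, but the subsystem is nqn.2016-06.io.spdk:cnode2, which was never created on this target, so the identity lookup fails the same way ('Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2'). Taken together, the three cases show that the retained PSK is bound to a specific (host NQN, subsystem NQN) pair, not to the key file alone. One way to inspect what the target actually has configured is to dump its subsystems over RPC; a sketch (field names may differ slightly between SPDK versions):

  # Show each subsystem's NQN, allowed hosts and listeners on the running target.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
      | jq '.[] | {nqn, hosts, listen_addresses}'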
00:21:38.122 request: 00:21:38.122 { 00:21:38.122 "name": "TLSTEST", 00:21:38.122 "trtype": "tcp", 00:21:38.122 "traddr": "10.0.0.2", 00:21:38.122 "adrfam": "ipv4", 00:21:38.122 "trsvcid": "4420", 00:21:38.122 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:38.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.122 "prchk_reftag": false, 00:21:38.122 "prchk_guard": false, 00:21:38.122 "hdgst": false, 00:21:38.122 "ddgst": false, 00:21:38.122 "psk": "/tmp/tmp.qMjX2YnxCF", 00:21:38.122 "method": "bdev_nvme_attach_controller", 00:21:38.122 "req_id": 1 00:21:38.122 } 00:21:38.122 Got JSON-RPC error response 00:21:38.122 response: 00:21:38.122 { 00:21:38.122 "code": -5, 00:21:38.122 "message": "Input/output error" 00:21:38.122 } 00:21:38.122 00:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3082862 00:21:38.122 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3082862 ']' 00:21:38.122 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3082862 00:21:38.122 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:38.122 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.122 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3082862 00:21:38.122 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:38.122 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:38.122 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3082862' 00:21:38.122 killing process with pid 3082862 00:21:38.122 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3082862 00:21:38.122 Received shutdown signal, test time was about 10.000000 seconds 00:21:38.122 00:21:38.122 Latency(us) 00:21:38.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.122 =================================================================================================================== 00:21:38.122 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:38.122 [2024-07-16 00:47:55.938196] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:38.122 00:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3082862 00:21:38.381 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:38.381 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:38.381 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:38.381 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:38.381 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:38.381 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:38.381 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:38.381 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3083198 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3083198 /var/tmp/bdevperf.sock 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3083198 ']' 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.639 00:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.639 [2024-07-16 00:47:56.271945] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:21:38.639 [2024-07-16 00:47:56.272008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083198 ] 00:21:38.639 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.639 [2024-07-16 00:47:56.388076] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.898 [2024-07-16 00:47:56.531753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.465 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.465 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:39.465 00:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:39.724 [2024-07-16 00:47:57.466398] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:39.724 [2024-07-16 00:47:57.467929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135f1f0 (9): Bad file descriptor 00:21:39.724 [2024-07-16 00:47:57.468922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:39.724 [2024-07-16 00:47:57.468948] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:39.724 [2024-07-16 00:47:57.468969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
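Final negative case: no --psk at all. The listener was created earlier with 'nvmf_subsystem_add_listener ... -t tcp -a 10.0.0.2 -s 4420 -k', i.e. it requires a secure channel, so the plain attach is torn down during connect and bdev_nvme_attach_controller again reports 'Input/output error'. A sketch of both sides of this case, using only commands already visible in this log (target-side secure listener, initiator-side attach without a key):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Target side: secure-channel ("-k") listener on 10.0.0.2:4420, as configured for cnode1 above.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  # Initiator side: the same attach without --psk is expected to fail against that listener.
  ! $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1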
00:21:39.724 request: 00:21:39.724 { 00:21:39.724 "name": "TLSTEST", 00:21:39.724 "trtype": "tcp", 00:21:39.724 "traddr": "10.0.0.2", 00:21:39.724 "adrfam": "ipv4", 00:21:39.724 "trsvcid": "4420", 00:21:39.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.724 "prchk_reftag": false, 00:21:39.724 "prchk_guard": false, 00:21:39.724 "hdgst": false, 00:21:39.724 "ddgst": false, 00:21:39.724 "method": "bdev_nvme_attach_controller", 00:21:39.724 "req_id": 1 00:21:39.724 } 00:21:39.724 Got JSON-RPC error response 00:21:39.724 response: 00:21:39.724 { 00:21:39.724 "code": -5, 00:21:39.724 "message": "Input/output error" 00:21:39.724 } 00:21:39.724 00:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3083198 00:21:39.724 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3083198 ']' 00:21:39.724 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3083198 00:21:39.724 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:39.724 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.724 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3083198 00:21:39.725 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:39.725 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:39.725 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3083198' 00:21:39.725 killing process with pid 3083198 00:21:39.725 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3083198 00:21:39.725 Received shutdown signal, test time was about 10.000000 seconds 00:21:39.725 00:21:39.725 Latency(us) 00:21:39.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.725 =================================================================================================================== 00:21:39.725 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:39.725 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3083198 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3077465 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3077465 ']' 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3077465 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3077465 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3077465' 00:21:40.292 
killing process with pid 3077465 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3077465 00:21:40.292 [2024-07-16 00:47:57.890210] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:40.292 00:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3077465 00:21:40.292 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:40.292 00:47:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:40.292 00:47:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:40.292 00:47:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:40.292 00:47:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:40.292 00:47:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:40.292 00:47:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.O1tPXhzYW3 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.O1tPXhzYW3 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3083608 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3083608 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3083608 ']' 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.551 00:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.551 [2024-07-16 00:47:58.287160] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:21:40.551 [2024-07-16 00:47:58.287219] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.551 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.551 [2024-07-16 00:47:58.374290] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.809 [2024-07-16 00:47:58.478866] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.809 [2024-07-16 00:47:58.478916] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.809 [2024-07-16 00:47:58.478929] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.809 [2024-07-16 00:47:58.478940] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.809 [2024-07-16 00:47:58.478949] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.809 [2024-07-16 00:47:58.478975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.746 00:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.746 00:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:41.746 00:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.746 00:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:41.746 00:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.746 00:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.746 00:47:59 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.O1tPXhzYW3 00:21:41.746 00:47:59 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.O1tPXhzYW3 00:21:41.746 00:47:59 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:41.746 [2024-07-16 00:47:59.488072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.746 00:47:59 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:42.005 00:47:59 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:42.264 [2024-07-16 00:47:59.965332] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:42.264 [2024-07-16 00:47:59.965558] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.264 00:47:59 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:42.523 malloc0 00:21:42.523 00:48:00 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:42.782 00:48:00 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.O1tPXhzYW3 00:21:43.040 [2024-07-16 00:48:00.697552] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O1tPXhzYW3 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O1tPXhzYW3' 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3084059 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3084059 /var/tmp/bdevperf.sock 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3084059 ']' 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.040 00:48:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.040 [2024-07-16 00:48:00.754847] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
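[editor's note] Before the bdevperf run that starts here, the target side was configured by setup_nvmf_tgt, traced above. Condensed, that is six rpc.py calls; the sketch below just collects them in one place. NQNs, address, port, and the key path are the ones from this log; it assumes scripts/rpc.py from the SPDK tree is on PATH instead of being invoked by the absolute workspace path used in the trace.

```bash
# Recap of the target-side RPC sequence traced above (flags copied verbatim from the trace)
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O1tPXhzYW3
```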
00:21:43.040 [2024-07-16 00:48:00.754905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084059 ] 00:21:43.040 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.040 [2024-07-16 00:48:00.870525] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.298 [2024-07-16 00:48:01.019536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.231 00:48:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.231 00:48:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:44.231 00:48:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O1tPXhzYW3 00:21:44.231 [2024-07-16 00:48:01.942730] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.231 [2024-07-16 00:48:01.942885] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:44.231 TLSTESTn1 00:21:44.231 00:48:02 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:44.488 Running I/O for 10 seconds... 00:21:54.460 00:21:54.460 Latency(us) 00:21:54.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.460 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:54.460 Verification LBA range: start 0x0 length 0x2000 00:21:54.460 TLSTESTn1 : 10.02 2801.54 10.94 0.00 0.00 45569.52 10068.71 58386.62 00:21:54.460 =================================================================================================================== 00:21:54.460 Total : 2801.54 10.94 0.00 0.00 45569.52 10068.71 58386.62 00:21:54.460 0 00:21:54.460 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:54.460 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3084059 00:21:54.461 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3084059 ']' 00:21:54.461 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3084059 00:21:54.461 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:54.461 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.461 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3084059 00:21:54.461 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:54.461 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:54.461 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3084059' 00:21:54.461 killing process with pid 3084059 00:21:54.461 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3084059 00:21:54.461 Received shutdown signal, test time was about 10.000000 seconds 00:21:54.461 00:21:54.461 Latency(us) 00:21:54.461 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:21:54.461 =================================================================================================================== 00:21:54.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.461 [2024-07-16 00:48:12.271694] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:54.461 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3084059 00:21:55.028 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.O1tPXhzYW3 00:21:55.028 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O1tPXhzYW3 00:21:55.028 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:55.028 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O1tPXhzYW3 00:21:55.028 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:55.028 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:55.028 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:55.028 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:55.028 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O1tPXhzYW3 00:21:55.028 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:55.028 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:55.028 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:55.029 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O1tPXhzYW3' 00:21:55.029 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:55.029 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3086520 00:21:55.029 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:55.029 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:55.029 00:48:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3086520 /var/tmp/bdevperf.sock 00:21:55.029 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3086520 ']' 00:21:55.029 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.029 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:55.029 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.029 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:55.029 00:48:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.029 [2024-07-16 00:48:12.672065] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:21:55.029 [2024-07-16 00:48:12.672129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086520 ] 00:21:55.029 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.029 [2024-07-16 00:48:12.788152] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.288 [2024-07-16 00:48:12.927758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.862 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.862 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:55.862 00:48:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O1tPXhzYW3 00:21:56.121 [2024-07-16 00:48:13.855726] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:56.121 [2024-07-16 00:48:13.855835] bdev_nvme.c:6130:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:56.121 [2024-07-16 00:48:13.855856] bdev_nvme.c:6235:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.O1tPXhzYW3 00:21:56.121 request: 00:21:56.121 { 00:21:56.121 "name": "TLSTEST", 00:21:56.121 "trtype": "tcp", 00:21:56.121 "traddr": "10.0.0.2", 00:21:56.121 "adrfam": "ipv4", 00:21:56.121 "trsvcid": "4420", 00:21:56.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:56.121 "prchk_reftag": false, 00:21:56.121 "prchk_guard": false, 00:21:56.121 "hdgst": false, 00:21:56.121 "ddgst": false, 00:21:56.121 "psk": "/tmp/tmp.O1tPXhzYW3", 00:21:56.121 "method": "bdev_nvme_attach_controller", 00:21:56.121 "req_id": 1 00:21:56.121 } 00:21:56.121 Got JSON-RPC error response 00:21:56.121 response: 00:21:56.121 { 00:21:56.121 "code": -1, 00:21:56.121 "message": "Operation not permitted" 00:21:56.121 } 00:21:56.121 00:48:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3086520 00:21:56.121 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3086520 ']' 00:21:56.121 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3086520 00:21:56.121 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:56.121 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.121 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3086520 00:21:56.121 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:56.121 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:56.121 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3086520' 00:21:56.121 killing process with pid 3086520 00:21:56.121 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3086520 00:21:56.121 Received shutdown signal, test time was about 10.000000 seconds 00:21:56.121 00:21:56.121 Latency(us) 00:21:56.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.121 
=================================================================================================================== 00:21:56.121 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:56.121 00:48:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3086520 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3083608 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3083608 ']' 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3083608 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3083608 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3083608' 00:21:56.775 killing process with pid 3083608 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3083608 00:21:56.775 [2024-07-16 00:48:14.319562] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:56.775 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3083608 00:21:57.046 00:48:14 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:57.046 00:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:57.046 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:57.046 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.046 00:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3086958 00:21:57.046 00:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3086958 00:21:57.046 00:48:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:57.046 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3086958 ']' 00:21:57.046 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.046 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.046 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
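[editor's note] Both failures in this negative-path stretch - the "Operation not permitted" attach above and the nvmf_subsystem_add_host "Internal error" a little further down - appear to stem from the same check: SPDK rejects a PSK file whose mode grants group or other access ("Incorrect permissions for PSK file"). A minimal illustration follows; the key path and chmod modes are taken from the log, while the stat calls are added here for inspection only and are not part of the test script.

```bash
key=/tmp/tmp.O1tPXhzYW3        # PSK file used throughout this log
chmod 0666 "$key"              # loosened mode -> "Incorrect permissions for PSK file" in the trace
stat -c '%a %n' "$key"         # 666 /tmp/tmp.O1tPXhzYW3
chmod 0600 "$key"              # owner-only mode -> accepted by both target and initiator
stat -c '%a %n' "$key"         # 600 /tmp/tmp.O1tPXhzYW3
```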
00:21:57.046 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.046 00:48:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.046 [2024-07-16 00:48:14.698074] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:21:57.046 [2024-07-16 00:48:14.698143] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.046 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.046 [2024-07-16 00:48:14.788267] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.304 [2024-07-16 00:48:14.887536] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.304 [2024-07-16 00:48:14.887584] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.304 [2024-07-16 00:48:14.887597] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.304 [2024-07-16 00:48:14.887608] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.304 [2024-07-16 00:48:14.887617] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:57.304 [2024-07-16 00:48:14.887653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.O1tPXhzYW3 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.O1tPXhzYW3 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.O1tPXhzYW3 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.O1tPXhzYW3 00:21:57.871 00:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:58.130 [2024-07-16 00:48:15.919508] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.130 00:48:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:58.388 
00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:58.955 [2024-07-16 00:48:16.637529] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:58.955 [2024-07-16 00:48:16.637813] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.955 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:59.213 malloc0 00:21:59.213 00:48:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:59.472 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O1tPXhzYW3 00:21:59.731 [2024-07-16 00:48:17.389914] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:59.731 [2024-07-16 00:48:17.389955] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:59.731 [2024-07-16 00:48:17.389998] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:59.731 request: 00:21:59.731 { 00:21:59.731 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.731 "host": "nqn.2016-06.io.spdk:host1", 00:21:59.731 "psk": "/tmp/tmp.O1tPXhzYW3", 00:21:59.731 "method": "nvmf_subsystem_add_host", 00:21:59.731 "req_id": 1 00:21:59.731 } 00:21:59.731 Got JSON-RPC error response 00:21:59.731 response: 00:21:59.731 { 00:21:59.731 "code": -32603, 00:21:59.731 "message": "Internal error" 00:21:59.731 } 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3086958 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3086958 ']' 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3086958 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3086958 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3086958' 00:21:59.731 killing process with pid 3086958 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3086958 00:21:59.731 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3086958 00:21:59.990 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.O1tPXhzYW3 00:21:59.990 00:48:17 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:59.990 
00:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:59.990 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.990 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.990 00:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3087544 00:21:59.990 00:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3087544 00:21:59.990 00:48:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:59.990 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3087544 ']' 00:21:59.990 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.990 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.990 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.990 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.990 00:48:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.990 [2024-07-16 00:48:17.759062] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:21:59.990 [2024-07-16 00:48:17.759123] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.990 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.249 [2024-07-16 00:48:17.846505] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.249 [2024-07-16 00:48:17.948898] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.249 [2024-07-16 00:48:17.948945] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.249 [2024-07-16 00:48:17.948958] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.249 [2024-07-16 00:48:17.948969] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.249 [2024-07-16 00:48:17.948978] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:00.249 [2024-07-16 00:48:17.949003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.187 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.187 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:01.187 00:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:01.187 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:01.187 00:48:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.187 00:48:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.187 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.O1tPXhzYW3 00:22:01.187 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.O1tPXhzYW3 00:22:01.187 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:01.187 [2024-07-16 00:48:18.965085] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.187 00:48:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:01.446 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:01.704 [2024-07-16 00:48:19.486510] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.704 [2024-07-16 00:48:19.486743] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.704 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:01.962 malloc0 00:22:01.963 00:48:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:02.221 00:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O1tPXhzYW3 00:22:02.479 [2024-07-16 00:48:20.274998] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:02.479 00:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.479 00:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3087904 00:22:02.479 00:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.479 00:48:20 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3087904 /var/tmp/bdevperf.sock 00:22:02.479 00:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3087904 ']' 00:22:02.479 00:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.479 00:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.479 00:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.479 00:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.479 00:48:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.738 [2024-07-16 00:48:20.358841] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:22:02.738 [2024-07-16 00:48:20.358908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087904 ] 00:22:02.738 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.738 [2024-07-16 00:48:20.475452] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.996 [2024-07-16 00:48:20.621866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.564 00:48:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.564 00:48:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:03.564 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O1tPXhzYW3 00:22:03.823 [2024-07-16 00:48:21.543936] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.823 [2024-07-16 00:48:21.544088] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:03.823 TLSTESTn1 00:22:03.823 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:04.391 00:48:21 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:04.391 "subsystems": [ 00:22:04.391 { 00:22:04.391 "subsystem": "keyring", 00:22:04.391 "config": [] 00:22:04.391 }, 00:22:04.391 { 00:22:04.391 "subsystem": "iobuf", 00:22:04.391 "config": [ 00:22:04.391 { 00:22:04.391 "method": "iobuf_set_options", 00:22:04.391 "params": { 00:22:04.391 "small_pool_count": 8192, 00:22:04.391 "large_pool_count": 1024, 00:22:04.391 "small_bufsize": 8192, 00:22:04.391 "large_bufsize": 135168 00:22:04.391 } 00:22:04.392 } 00:22:04.392 ] 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "subsystem": "sock", 00:22:04.392 "config": [ 00:22:04.392 { 00:22:04.392 "method": "sock_set_default_impl", 00:22:04.392 "params": { 00:22:04.392 "impl_name": "posix" 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "sock_impl_set_options", 00:22:04.392 "params": { 00:22:04.392 "impl_name": "ssl", 00:22:04.392 "recv_buf_size": 4096, 00:22:04.392 "send_buf_size": 4096, 00:22:04.392 "enable_recv_pipe": true, 00:22:04.392 "enable_quickack": false, 00:22:04.392 "enable_placement_id": 0, 00:22:04.392 "enable_zerocopy_send_server": true, 00:22:04.392 "enable_zerocopy_send_client": false, 00:22:04.392 "zerocopy_threshold": 0, 00:22:04.392 "tls_version": 0, 00:22:04.392 "enable_ktls": false 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "sock_impl_set_options", 00:22:04.392 "params": { 00:22:04.392 "impl_name": "posix", 00:22:04.392 "recv_buf_size": 2097152, 00:22:04.392 
"send_buf_size": 2097152, 00:22:04.392 "enable_recv_pipe": true, 00:22:04.392 "enable_quickack": false, 00:22:04.392 "enable_placement_id": 0, 00:22:04.392 "enable_zerocopy_send_server": true, 00:22:04.392 "enable_zerocopy_send_client": false, 00:22:04.392 "zerocopy_threshold": 0, 00:22:04.392 "tls_version": 0, 00:22:04.392 "enable_ktls": false 00:22:04.392 } 00:22:04.392 } 00:22:04.392 ] 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "subsystem": "vmd", 00:22:04.392 "config": [] 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "subsystem": "accel", 00:22:04.392 "config": [ 00:22:04.392 { 00:22:04.392 "method": "accel_set_options", 00:22:04.392 "params": { 00:22:04.392 "small_cache_size": 128, 00:22:04.392 "large_cache_size": 16, 00:22:04.392 "task_count": 2048, 00:22:04.392 "sequence_count": 2048, 00:22:04.392 "buf_count": 2048 00:22:04.392 } 00:22:04.392 } 00:22:04.392 ] 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "subsystem": "bdev", 00:22:04.392 "config": [ 00:22:04.392 { 00:22:04.392 "method": "bdev_set_options", 00:22:04.392 "params": { 00:22:04.392 "bdev_io_pool_size": 65535, 00:22:04.392 "bdev_io_cache_size": 256, 00:22:04.392 "bdev_auto_examine": true, 00:22:04.392 "iobuf_small_cache_size": 128, 00:22:04.392 "iobuf_large_cache_size": 16 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "bdev_raid_set_options", 00:22:04.392 "params": { 00:22:04.392 "process_window_size_kb": 1024 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "bdev_iscsi_set_options", 00:22:04.392 "params": { 00:22:04.392 "timeout_sec": 30 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "bdev_nvme_set_options", 00:22:04.392 "params": { 00:22:04.392 "action_on_timeout": "none", 00:22:04.392 "timeout_us": 0, 00:22:04.392 "timeout_admin_us": 0, 00:22:04.392 "keep_alive_timeout_ms": 10000, 00:22:04.392 "arbitration_burst": 0, 00:22:04.392 "low_priority_weight": 0, 00:22:04.392 "medium_priority_weight": 0, 00:22:04.392 "high_priority_weight": 0, 00:22:04.392 "nvme_adminq_poll_period_us": 10000, 00:22:04.392 "nvme_ioq_poll_period_us": 0, 00:22:04.392 "io_queue_requests": 0, 00:22:04.392 "delay_cmd_submit": true, 00:22:04.392 "transport_retry_count": 4, 00:22:04.392 "bdev_retry_count": 3, 00:22:04.392 "transport_ack_timeout": 0, 00:22:04.392 "ctrlr_loss_timeout_sec": 0, 00:22:04.392 "reconnect_delay_sec": 0, 00:22:04.392 "fast_io_fail_timeout_sec": 0, 00:22:04.392 "disable_auto_failback": false, 00:22:04.392 "generate_uuids": false, 00:22:04.392 "transport_tos": 0, 00:22:04.392 "nvme_error_stat": false, 00:22:04.392 "rdma_srq_size": 0, 00:22:04.392 "io_path_stat": false, 00:22:04.392 "allow_accel_sequence": false, 00:22:04.392 "rdma_max_cq_size": 0, 00:22:04.392 "rdma_cm_event_timeout_ms": 0, 00:22:04.392 "dhchap_digests": [ 00:22:04.392 "sha256", 00:22:04.392 "sha384", 00:22:04.392 "sha512" 00:22:04.392 ], 00:22:04.392 "dhchap_dhgroups": [ 00:22:04.392 "null", 00:22:04.392 "ffdhe2048", 00:22:04.392 "ffdhe3072", 00:22:04.392 "ffdhe4096", 00:22:04.392 "ffdhe6144", 00:22:04.392 "ffdhe8192" 00:22:04.392 ] 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "bdev_nvme_set_hotplug", 00:22:04.392 "params": { 00:22:04.392 "period_us": 100000, 00:22:04.392 "enable": false 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "bdev_malloc_create", 00:22:04.392 "params": { 00:22:04.392 "name": "malloc0", 00:22:04.392 "num_blocks": 8192, 00:22:04.392 "block_size": 4096, 00:22:04.392 "physical_block_size": 4096, 00:22:04.392 "uuid": 
"0cf011fc-cdeb-48dc-8d06-bd647a47c425", 00:22:04.392 "optimal_io_boundary": 0 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "bdev_wait_for_examine" 00:22:04.392 } 00:22:04.392 ] 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "subsystem": "nbd", 00:22:04.392 "config": [] 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "subsystem": "scheduler", 00:22:04.392 "config": [ 00:22:04.392 { 00:22:04.392 "method": "framework_set_scheduler", 00:22:04.392 "params": { 00:22:04.392 "name": "static" 00:22:04.392 } 00:22:04.392 } 00:22:04.392 ] 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "subsystem": "nvmf", 00:22:04.392 "config": [ 00:22:04.392 { 00:22:04.392 "method": "nvmf_set_config", 00:22:04.392 "params": { 00:22:04.392 "discovery_filter": "match_any", 00:22:04.392 "admin_cmd_passthru": { 00:22:04.392 "identify_ctrlr": false 00:22:04.392 } 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "nvmf_set_max_subsystems", 00:22:04.392 "params": { 00:22:04.392 "max_subsystems": 1024 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "nvmf_set_crdt", 00:22:04.392 "params": { 00:22:04.392 "crdt1": 0, 00:22:04.392 "crdt2": 0, 00:22:04.392 "crdt3": 0 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "nvmf_create_transport", 00:22:04.392 "params": { 00:22:04.392 "trtype": "TCP", 00:22:04.392 "max_queue_depth": 128, 00:22:04.392 "max_io_qpairs_per_ctrlr": 127, 00:22:04.392 "in_capsule_data_size": 4096, 00:22:04.392 "max_io_size": 131072, 00:22:04.392 "io_unit_size": 131072, 00:22:04.392 "max_aq_depth": 128, 00:22:04.392 "num_shared_buffers": 511, 00:22:04.392 "buf_cache_size": 4294967295, 00:22:04.392 "dif_insert_or_strip": false, 00:22:04.392 "zcopy": false, 00:22:04.392 "c2h_success": false, 00:22:04.392 "sock_priority": 0, 00:22:04.392 "abort_timeout_sec": 1, 00:22:04.392 "ack_timeout": 0, 00:22:04.392 "data_wr_pool_size": 0 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "nvmf_create_subsystem", 00:22:04.392 "params": { 00:22:04.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.392 "allow_any_host": false, 00:22:04.392 "serial_number": "SPDK00000000000001", 00:22:04.392 "model_number": "SPDK bdev Controller", 00:22:04.392 "max_namespaces": 10, 00:22:04.392 "min_cntlid": 1, 00:22:04.392 "max_cntlid": 65519, 00:22:04.392 "ana_reporting": false 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "nvmf_subsystem_add_host", 00:22:04.392 "params": { 00:22:04.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.392 "host": "nqn.2016-06.io.spdk:host1", 00:22:04.392 "psk": "/tmp/tmp.O1tPXhzYW3" 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.392 "method": "nvmf_subsystem_add_ns", 00:22:04.392 "params": { 00:22:04.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.392 "namespace": { 00:22:04.392 "nsid": 1, 00:22:04.392 "bdev_name": "malloc0", 00:22:04.392 "nguid": "0CF011FCCDEB48DC8D06BD647A47C425", 00:22:04.392 "uuid": "0cf011fc-cdeb-48dc-8d06-bd647a47c425", 00:22:04.392 "no_auto_visible": false 00:22:04.392 } 00:22:04.392 } 00:22:04.392 }, 00:22:04.392 { 00:22:04.393 "method": "nvmf_subsystem_add_listener", 00:22:04.393 "params": { 00:22:04.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.393 "listen_address": { 00:22:04.393 "trtype": "TCP", 00:22:04.393 "adrfam": "IPv4", 00:22:04.393 "traddr": "10.0.0.2", 00:22:04.393 "trsvcid": "4420" 00:22:04.393 }, 00:22:04.393 "secure_channel": true 00:22:04.393 } 00:22:04.393 } 00:22:04.393 ] 00:22:04.393 } 00:22:04.393 ] 00:22:04.393 }' 00:22:04.393 00:48:21 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:04.651 00:48:22 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:04.651 "subsystems": [ 00:22:04.651 { 00:22:04.651 "subsystem": "keyring", 00:22:04.651 "config": [] 00:22:04.651 }, 00:22:04.651 { 00:22:04.651 "subsystem": "iobuf", 00:22:04.651 "config": [ 00:22:04.651 { 00:22:04.651 "method": "iobuf_set_options", 00:22:04.651 "params": { 00:22:04.651 "small_pool_count": 8192, 00:22:04.651 "large_pool_count": 1024, 00:22:04.651 "small_bufsize": 8192, 00:22:04.652 "large_bufsize": 135168 00:22:04.652 } 00:22:04.652 } 00:22:04.652 ] 00:22:04.652 }, 00:22:04.652 { 00:22:04.652 "subsystem": "sock", 00:22:04.652 "config": [ 00:22:04.652 { 00:22:04.652 "method": "sock_set_default_impl", 00:22:04.652 "params": { 00:22:04.652 "impl_name": "posix" 00:22:04.652 } 00:22:04.652 }, 00:22:04.652 { 00:22:04.652 "method": "sock_impl_set_options", 00:22:04.652 "params": { 00:22:04.652 "impl_name": "ssl", 00:22:04.652 "recv_buf_size": 4096, 00:22:04.652 "send_buf_size": 4096, 00:22:04.652 "enable_recv_pipe": true, 00:22:04.652 "enable_quickack": false, 00:22:04.652 "enable_placement_id": 0, 00:22:04.652 "enable_zerocopy_send_server": true, 00:22:04.652 "enable_zerocopy_send_client": false, 00:22:04.652 "zerocopy_threshold": 0, 00:22:04.652 "tls_version": 0, 00:22:04.652 "enable_ktls": false 00:22:04.652 } 00:22:04.652 }, 00:22:04.652 { 00:22:04.652 "method": "sock_impl_set_options", 00:22:04.652 "params": { 00:22:04.652 "impl_name": "posix", 00:22:04.652 "recv_buf_size": 2097152, 00:22:04.652 "send_buf_size": 2097152, 00:22:04.652 "enable_recv_pipe": true, 00:22:04.652 "enable_quickack": false, 00:22:04.652 "enable_placement_id": 0, 00:22:04.652 "enable_zerocopy_send_server": true, 00:22:04.652 "enable_zerocopy_send_client": false, 00:22:04.652 "zerocopy_threshold": 0, 00:22:04.652 "tls_version": 0, 00:22:04.652 "enable_ktls": false 00:22:04.652 } 00:22:04.652 } 00:22:04.652 ] 00:22:04.652 }, 00:22:04.652 { 00:22:04.652 "subsystem": "vmd", 00:22:04.652 "config": [] 00:22:04.652 }, 00:22:04.652 { 00:22:04.652 "subsystem": "accel", 00:22:04.652 "config": [ 00:22:04.652 { 00:22:04.652 "method": "accel_set_options", 00:22:04.652 "params": { 00:22:04.652 "small_cache_size": 128, 00:22:04.652 "large_cache_size": 16, 00:22:04.652 "task_count": 2048, 00:22:04.652 "sequence_count": 2048, 00:22:04.652 "buf_count": 2048 00:22:04.652 } 00:22:04.652 } 00:22:04.652 ] 00:22:04.652 }, 00:22:04.652 { 00:22:04.652 "subsystem": "bdev", 00:22:04.652 "config": [ 00:22:04.652 { 00:22:04.652 "method": "bdev_set_options", 00:22:04.652 "params": { 00:22:04.652 "bdev_io_pool_size": 65535, 00:22:04.652 "bdev_io_cache_size": 256, 00:22:04.652 "bdev_auto_examine": true, 00:22:04.652 "iobuf_small_cache_size": 128, 00:22:04.652 "iobuf_large_cache_size": 16 00:22:04.652 } 00:22:04.652 }, 00:22:04.652 { 00:22:04.652 "method": "bdev_raid_set_options", 00:22:04.652 "params": { 00:22:04.652 "process_window_size_kb": 1024 00:22:04.652 } 00:22:04.652 }, 00:22:04.652 { 00:22:04.652 "method": "bdev_iscsi_set_options", 00:22:04.652 "params": { 00:22:04.652 "timeout_sec": 30 00:22:04.652 } 00:22:04.652 }, 00:22:04.652 { 00:22:04.652 "method": "bdev_nvme_set_options", 00:22:04.652 "params": { 00:22:04.652 "action_on_timeout": "none", 00:22:04.652 "timeout_us": 0, 00:22:04.652 "timeout_admin_us": 0, 00:22:04.652 "keep_alive_timeout_ms": 10000, 00:22:04.652 "arbitration_burst": 0, 
00:22:04.652 "low_priority_weight": 0, 00:22:04.652 "medium_priority_weight": 0, 00:22:04.652 "high_priority_weight": 0, 00:22:04.652 "nvme_adminq_poll_period_us": 10000, 00:22:04.652 "nvme_ioq_poll_period_us": 0, 00:22:04.652 "io_queue_requests": 512, 00:22:04.652 "delay_cmd_submit": true, 00:22:04.652 "transport_retry_count": 4, 00:22:04.652 "bdev_retry_count": 3, 00:22:04.652 "transport_ack_timeout": 0, 00:22:04.652 "ctrlr_loss_timeout_sec": 0, 00:22:04.652 "reconnect_delay_sec": 0, 00:22:04.652 "fast_io_fail_timeout_sec": 0, 00:22:04.652 "disable_auto_failback": false, 00:22:04.652 "generate_uuids": false, 00:22:04.652 "transport_tos": 0, 00:22:04.652 "nvme_error_stat": false, 00:22:04.652 "rdma_srq_size": 0, 00:22:04.652 "io_path_stat": false, 00:22:04.652 "allow_accel_sequence": false, 00:22:04.652 "rdma_max_cq_size": 0, 00:22:04.652 "rdma_cm_event_timeout_ms": 0, 00:22:04.652 "dhchap_digests": [ 00:22:04.652 "sha256", 00:22:04.652 "sha384", 00:22:04.652 "sha512" 00:22:04.652 ], 00:22:04.652 "dhchap_dhgroups": [ 00:22:04.652 "null", 00:22:04.652 "ffdhe2048", 00:22:04.652 "ffdhe3072", 00:22:04.652 "ffdhe4096", 00:22:04.652 "ffdhe6144", 00:22:04.652 "ffdhe8192" 00:22:04.652 ] 00:22:04.652 } 00:22:04.652 }, 00:22:04.652 { 00:22:04.652 "method": "bdev_nvme_attach_controller", 00:22:04.652 "params": { 00:22:04.652 "name": "TLSTEST", 00:22:04.652 "trtype": "TCP", 00:22:04.652 "adrfam": "IPv4", 00:22:04.652 "traddr": "10.0.0.2", 00:22:04.652 "trsvcid": "4420", 00:22:04.652 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.652 "prchk_reftag": false, 00:22:04.652 "prchk_guard": false, 00:22:04.652 "ctrlr_loss_timeout_sec": 0, 00:22:04.652 "reconnect_delay_sec": 0, 00:22:04.652 "fast_io_fail_timeout_sec": 0, 00:22:04.652 "psk": "/tmp/tmp.O1tPXhzYW3", 00:22:04.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.652 "hdgst": false, 00:22:04.652 "ddgst": false 00:22:04.652 } 00:22:04.652 }, 00:22:04.652 { 00:22:04.652 "method": "bdev_nvme_set_hotplug", 00:22:04.652 "params": { 00:22:04.652 "period_us": 100000, 00:22:04.652 "enable": false 00:22:04.652 } 00:22:04.652 }, 00:22:04.652 { 00:22:04.652 "method": "bdev_wait_for_examine" 00:22:04.652 } 00:22:04.652 ] 00:22:04.652 }, 00:22:04.652 { 00:22:04.652 "subsystem": "nbd", 00:22:04.652 "config": [] 00:22:04.652 } 00:22:04.652 ] 00:22:04.652 }' 00:22:04.652 00:48:22 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3087904 00:22:04.652 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3087904 ']' 00:22:04.652 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3087904 00:22:04.652 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:04.652 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:04.652 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3087904 00:22:04.652 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:04.652 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:04.652 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3087904' 00:22:04.652 killing process with pid 3087904 00:22:04.652 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3087904 00:22:04.652 Received shutdown signal, test time was about 10.000000 seconds 00:22:04.652 00:22:04.652 Latency(us) 00:22:04.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:22:04.652 =================================================================================================================== 00:22:04.652 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:04.652 [2024-07-16 00:48:22.363456] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:04.652 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3087904 00:22:04.911 00:48:22 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3087544 00:22:04.911 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3087544 ']' 00:22:04.911 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3087544 00:22:04.911 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:04.911 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:04.911 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3087544 00:22:05.169 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:05.170 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:05.170 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3087544' 00:22:05.170 killing process with pid 3087544 00:22:05.170 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3087544 00:22:05.170 [2024-07-16 00:48:22.771293] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:05.170 00:48:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3087544 00:22:05.428 00:48:23 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:05.428 00:48:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.428 00:48:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.428 00:48:23 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:05.428 "subsystems": [ 00:22:05.428 { 00:22:05.428 "subsystem": "keyring", 00:22:05.428 "config": [] 00:22:05.428 }, 00:22:05.428 { 00:22:05.428 "subsystem": "iobuf", 00:22:05.428 "config": [ 00:22:05.428 { 00:22:05.428 "method": "iobuf_set_options", 00:22:05.428 "params": { 00:22:05.428 "small_pool_count": 8192, 00:22:05.428 "large_pool_count": 1024, 00:22:05.428 "small_bufsize": 8192, 00:22:05.428 "large_bufsize": 135168 00:22:05.428 } 00:22:05.428 } 00:22:05.428 ] 00:22:05.428 }, 00:22:05.428 { 00:22:05.428 "subsystem": "sock", 00:22:05.428 "config": [ 00:22:05.428 { 00:22:05.428 "method": "sock_set_default_impl", 00:22:05.428 "params": { 00:22:05.428 "impl_name": "posix" 00:22:05.428 } 00:22:05.428 }, 00:22:05.428 { 00:22:05.428 "method": "sock_impl_set_options", 00:22:05.428 "params": { 00:22:05.428 "impl_name": "ssl", 00:22:05.428 "recv_buf_size": 4096, 00:22:05.428 "send_buf_size": 4096, 00:22:05.428 "enable_recv_pipe": true, 00:22:05.428 "enable_quickack": false, 00:22:05.428 "enable_placement_id": 0, 00:22:05.428 "enable_zerocopy_send_server": true, 00:22:05.428 "enable_zerocopy_send_client": false, 00:22:05.428 "zerocopy_threshold": 0, 00:22:05.428 "tls_version": 0, 00:22:05.428 "enable_ktls": false 00:22:05.428 } 00:22:05.428 }, 00:22:05.428 { 00:22:05.428 "method": "sock_impl_set_options", 00:22:05.428 "params": { 00:22:05.428 "impl_name": "posix", 00:22:05.428 
"recv_buf_size": 2097152, 00:22:05.428 "send_buf_size": 2097152, 00:22:05.428 "enable_recv_pipe": true, 00:22:05.428 "enable_quickack": false, 00:22:05.428 "enable_placement_id": 0, 00:22:05.428 "enable_zerocopy_send_server": true, 00:22:05.428 "enable_zerocopy_send_client": false, 00:22:05.428 "zerocopy_threshold": 0, 00:22:05.428 "tls_version": 0, 00:22:05.428 "enable_ktls": false 00:22:05.428 } 00:22:05.428 } 00:22:05.428 ] 00:22:05.428 }, 00:22:05.428 { 00:22:05.428 "subsystem": "vmd", 00:22:05.428 "config": [] 00:22:05.428 }, 00:22:05.428 { 00:22:05.428 "subsystem": "accel", 00:22:05.428 "config": [ 00:22:05.428 { 00:22:05.428 "method": "accel_set_options", 00:22:05.428 "params": { 00:22:05.428 "small_cache_size": 128, 00:22:05.428 "large_cache_size": 16, 00:22:05.429 "task_count": 2048, 00:22:05.429 "sequence_count": 2048, 00:22:05.429 "buf_count": 2048 00:22:05.429 } 00:22:05.429 } 00:22:05.429 ] 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "subsystem": "bdev", 00:22:05.429 "config": [ 00:22:05.429 { 00:22:05.429 "method": "bdev_set_options", 00:22:05.429 "params": { 00:22:05.429 "bdev_io_pool_size": 65535, 00:22:05.429 "bdev_io_cache_size": 256, 00:22:05.429 "bdev_auto_examine": true, 00:22:05.429 "iobuf_small_cache_size": 128, 00:22:05.429 "iobuf_large_cache_size": 16 00:22:05.429 } 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "method": "bdev_raid_set_options", 00:22:05.429 "params": { 00:22:05.429 "process_window_size_kb": 1024 00:22:05.429 } 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "method": "bdev_iscsi_set_options", 00:22:05.429 "params": { 00:22:05.429 "timeout_sec": 30 00:22:05.429 } 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "method": "bdev_nvme_set_options", 00:22:05.429 "params": { 00:22:05.429 "action_on_timeout": "none", 00:22:05.429 "timeout_us": 0, 00:22:05.429 "timeout_admin_us": 0, 00:22:05.429 "keep_alive_timeout_ms": 10000, 00:22:05.429 "arbitration_burst": 0, 00:22:05.429 "low_priority_weight": 0, 00:22:05.429 "medium_priority_weight": 0, 00:22:05.429 "high_priority_weight": 0, 00:22:05.429 "nvme_adminq_poll_period_us": 10000, 00:22:05.429 "nvme_ioq_poll_period_us": 0, 00:22:05.429 "io_queue_requests": 0, 00:22:05.429 "delay_cmd_submit": true, 00:22:05.429 "transport_retry_count": 4, 00:22:05.429 "bdev_retry_count": 3, 00:22:05.429 "transport_ack_timeout": 0, 00:22:05.429 "ctrlr_loss_timeout_sec": 0, 00:22:05.429 "reconnect_delay_sec": 0, 00:22:05.429 "fast_io_fail_timeout_sec": 0, 00:22:05.429 "disable_auto_failback": false, 00:22:05.429 "generate_uuids": false, 00:22:05.429 "transport_tos": 0, 00:22:05.429 "nvme_error_stat": false, 00:22:05.429 "rdma_srq_size": 0, 00:22:05.429 "io_path_stat": false, 00:22:05.429 "allow_accel_sequence": false, 00:22:05.429 "rdma_max_cq_size": 0, 00:22:05.429 "rdma_cm_event_timeout_ms": 0, 00:22:05.429 "dhchap_digests": [ 00:22:05.429 "sha256", 00:22:05.429 "sha384", 00:22:05.429 "sha512" 00:22:05.429 ], 00:22:05.429 "dhchap_dhgroups": [ 00:22:05.429 "null", 00:22:05.429 "ffdhe2048", 00:22:05.429 "ffdhe3072", 00:22:05.429 "ffdhe4096", 00:22:05.429 "ffdhe6144", 00:22:05.429 "ffdhe8192" 00:22:05.429 ] 00:22:05.429 } 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "method": "bdev_nvme_set_hotplug", 00:22:05.429 "params": { 00:22:05.429 "period_us": 100000, 00:22:05.429 "enable": false 00:22:05.429 } 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "method": "bdev_malloc_create", 00:22:05.429 "params": { 00:22:05.429 "name": "malloc0", 00:22:05.429 "num_blocks": 8192, 00:22:05.429 "block_size": 4096, 00:22:05.429 "physical_block_size": 4096, 
00:22:05.429 "uuid": "0cf011fc-cdeb-48dc-8d06-bd647a47c425", 00:22:05.429 "optimal_io_boundary": 0 00:22:05.429 } 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "method": "bdev_wait_for_examine" 00:22:05.429 } 00:22:05.429 ] 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "subsystem": "nbd", 00:22:05.429 "config": [] 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "subsystem": "scheduler", 00:22:05.429 "config": [ 00:22:05.429 { 00:22:05.429 "method": "framework_set_scheduler", 00:22:05.429 "params": { 00:22:05.429 "name": "static" 00:22:05.429 } 00:22:05.429 } 00:22:05.429 ] 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "subsystem": "nvmf", 00:22:05.429 "config": [ 00:22:05.429 { 00:22:05.429 "method": "nvmf_set_config", 00:22:05.429 "params": { 00:22:05.429 "discovery_filter": "match_any", 00:22:05.429 "admin_cmd_passthru": { 00:22:05.429 "identify_ctrlr": false 00:22:05.429 } 00:22:05.429 } 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "method": "nvmf_set_max_subsystems", 00:22:05.429 "params": { 00:22:05.429 "max_subsystems": 1024 00:22:05.429 } 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "method": "nvmf_set_crdt", 00:22:05.429 "params": { 00:22:05.429 "crdt1": 0, 00:22:05.429 "crdt2": 0, 00:22:05.429 "crdt3": 0 00:22:05.429 } 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "method": "nvmf_create_transport", 00:22:05.429 "params": { 00:22:05.429 "trtype": "TCP", 00:22:05.429 "max_queue_depth": 128, 00:22:05.429 "max_io_qpairs_per_ctrlr": 127, 00:22:05.429 "in_capsule_data_size": 4096, 00:22:05.429 "max_io_size": 131072, 00:22:05.429 "io_unit_size": 131072, 00:22:05.429 "max_aq_depth": 128, 00:22:05.429 "num_shared_buffers": 511, 00:22:05.429 "buf_cache_size": 4294967295, 00:22:05.429 "dif_insert_or_strip": false, 00:22:05.429 "zcopy": false, 00:22:05.429 "c2h_success": false, 00:22:05.429 "sock_priority": 0, 00:22:05.429 "abort_timeout_sec": 1, 00:22:05.429 "ack_timeout": 0, 00:22:05.429 "data_wr_pool_size": 0 00:22:05.429 } 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "method": "nvmf_create_subsystem", 00:22:05.429 "params": { 00:22:05.429 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.429 "allow_any_host": false, 00:22:05.429 "serial_number": "SPDK00000000000001", 00:22:05.429 "model_number": "SPDK bdev Controller", 00:22:05.429 "max_namespaces": 10, 00:22:05.429 "min_cntlid": 1, 00:22:05.429 "max_cntlid": 65519, 00:22:05.429 "ana_reporting": false 00:22:05.429 } 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "method": "nvmf_subsystem_add_host", 00:22:05.429 "params": { 00:22:05.429 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.429 "host": "nqn.2016-06.io.spdk:host1", 00:22:05.429 "psk": "/tmp/tmp.O1tPXhzYW3" 00:22:05.429 } 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "method": "nvmf_subsystem_add_ns", 00:22:05.429 "params": { 00:22:05.429 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.429 "namespace": { 00:22:05.429 "nsid": 1, 00:22:05.429 "bdev_name": "malloc0", 00:22:05.429 "nguid": "0CF011FCCDEB48DC8D06BD647A47C425", 00:22:05.429 "uuid": "0cf011fc-cdeb-48dc-8d06-bd647a47c425", 00:22:05.429 "no_auto_visible": false 00:22:05.429 } 00:22:05.429 } 00:22:05.429 }, 00:22:05.429 { 00:22:05.429 "method": "nvmf_subsystem_add_listener", 00:22:05.429 "params": { 00:22:05.429 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.429 "listen_address": { 00:22:05.429 "trtype": "TCP", 00:22:05.429 "adrfam": "IPv4", 00:22:05.429 "traddr": "10.0.0.2", 00:22:05.429 "trsvcid": "4420" 00:22:05.429 }, 00:22:05.429 "secure_channel": true 00:22:05.429 } 00:22:05.429 } 00:22:05.429 ] 00:22:05.429 } 00:22:05.429 ] 00:22:05.429 }' 
00:22:05.429 00:48:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.429 00:48:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3088446 00:22:05.429 00:48:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:05.429 00:48:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3088446 00:22:05.429 00:48:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3088446 ']' 00:22:05.430 00:48:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.430 00:48:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.430 00:48:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.430 00:48:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.430 00:48:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.430 [2024-07-16 00:48:23.144803] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:22:05.430 [2024-07-16 00:48:23.144865] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.430 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.430 [2024-07-16 00:48:23.233519] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.686 [2024-07-16 00:48:23.336109] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.686 [2024-07-16 00:48:23.336154] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.686 [2024-07-16 00:48:23.336167] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.686 [2024-07-16 00:48:23.336178] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.686 [2024-07-16 00:48:23.336188] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
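The app_setup_trace notices above name the command for grabbing a tracepoint snapshot from this target instance (shared-memory id 0). If one were wanted during the run, it would be, per the notice itself:

    # as suggested by the notice above; '-s nvmf -i 0' matches this instance
    ./build/bin/spdk_trace -s nvmf -i 0
    # or, per the same notice, copy /dev/shm/nvmf_trace.0 for offline analysis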
00:22:05.686 [2024-07-16 00:48:23.336264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.943 [2024-07-16 00:48:23.554671] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.943 [2024-07-16 00:48:23.570574] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:05.943 [2024-07-16 00:48:23.586649] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.943 [2024-07-16 00:48:23.598477] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3088720 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3088720 /var/tmp/bdevperf.sock 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3088720 ']' 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
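Next comes the initiator side: bdevperf is launched in wait-for-RPC mode (-z) on its own RPC socket with a second JSON config, and is only kicked into running later by bdevperf.py. Stripped of the Jenkins paths, the pattern used here looks roughly like this (flags copied from the invocations in the log; bperf.json stands in for the config echoed to /dev/fd/63 below):

    # start bdevperf idle, listening on its own RPC socket
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c bperf.json &
    # once the socket is up, trigger the 10-second verify run
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests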
00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:06.509 "subsystems": [ 00:22:06.509 { 00:22:06.509 "subsystem": "keyring", 00:22:06.509 "config": [] 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "subsystem": "iobuf", 00:22:06.509 "config": [ 00:22:06.509 { 00:22:06.509 "method": "iobuf_set_options", 00:22:06.509 "params": { 00:22:06.509 "small_pool_count": 8192, 00:22:06.509 "large_pool_count": 1024, 00:22:06.509 "small_bufsize": 8192, 00:22:06.509 "large_bufsize": 135168 00:22:06.509 } 00:22:06.509 } 00:22:06.509 ] 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "subsystem": "sock", 00:22:06.509 "config": [ 00:22:06.509 { 00:22:06.509 "method": "sock_set_default_impl", 00:22:06.509 "params": { 00:22:06.509 "impl_name": "posix" 00:22:06.509 } 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "method": "sock_impl_set_options", 00:22:06.509 "params": { 00:22:06.509 "impl_name": "ssl", 00:22:06.509 "recv_buf_size": 4096, 00:22:06.509 "send_buf_size": 4096, 00:22:06.509 "enable_recv_pipe": true, 00:22:06.509 "enable_quickack": false, 00:22:06.509 "enable_placement_id": 0, 00:22:06.509 "enable_zerocopy_send_server": true, 00:22:06.509 "enable_zerocopy_send_client": false, 00:22:06.509 "zerocopy_threshold": 0, 00:22:06.509 "tls_version": 0, 00:22:06.509 "enable_ktls": false 00:22:06.509 } 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "method": "sock_impl_set_options", 00:22:06.509 "params": { 00:22:06.509 "impl_name": "posix", 00:22:06.509 "recv_buf_size": 2097152, 00:22:06.509 "send_buf_size": 2097152, 00:22:06.509 "enable_recv_pipe": true, 00:22:06.509 "enable_quickack": false, 00:22:06.509 "enable_placement_id": 0, 00:22:06.509 "enable_zerocopy_send_server": true, 00:22:06.509 "enable_zerocopy_send_client": false, 00:22:06.509 "zerocopy_threshold": 0, 00:22:06.509 "tls_version": 0, 00:22:06.509 "enable_ktls": false 00:22:06.509 } 00:22:06.509 } 00:22:06.509 ] 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "subsystem": "vmd", 00:22:06.509 "config": [] 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "subsystem": "accel", 00:22:06.509 "config": [ 00:22:06.509 { 00:22:06.509 "method": "accel_set_options", 00:22:06.509 "params": { 00:22:06.509 "small_cache_size": 128, 00:22:06.509 "large_cache_size": 16, 00:22:06.509 "task_count": 2048, 00:22:06.509 "sequence_count": 2048, 00:22:06.509 "buf_count": 2048 00:22:06.509 } 00:22:06.509 } 00:22:06.509 ] 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "subsystem": "bdev", 00:22:06.509 "config": [ 00:22:06.509 { 00:22:06.509 "method": "bdev_set_options", 00:22:06.509 "params": { 00:22:06.509 "bdev_io_pool_size": 65535, 00:22:06.509 "bdev_io_cache_size": 256, 00:22:06.509 "bdev_auto_examine": true, 00:22:06.509 "iobuf_small_cache_size": 128, 00:22:06.509 "iobuf_large_cache_size": 16 00:22:06.509 } 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "method": "bdev_raid_set_options", 00:22:06.509 "params": { 00:22:06.509 "process_window_size_kb": 1024 00:22:06.509 } 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "method": "bdev_iscsi_set_options", 00:22:06.509 "params": { 00:22:06.509 "timeout_sec": 30 00:22:06.509 } 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "method": "bdev_nvme_set_options", 00:22:06.509 "params": { 00:22:06.509 "action_on_timeout": "none", 00:22:06.509 "timeout_us": 0, 00:22:06.509 "timeout_admin_us": 0, 00:22:06.509 "keep_alive_timeout_ms": 10000, 00:22:06.509 "arbitration_burst": 0, 00:22:06.509 "low_priority_weight": 0, 00:22:06.509 "medium_priority_weight": 0, 00:22:06.509 "high_priority_weight": 0, 00:22:06.509 
"nvme_adminq_poll_period_us": 10000, 00:22:06.509 "nvme_ioq_poll_period_us": 0, 00:22:06.509 "io_queue_requests": 512, 00:22:06.509 "delay_cmd_submit": true, 00:22:06.509 "transport_retry_count": 4, 00:22:06.509 "bdev_retry_count": 3, 00:22:06.509 "transport_ack_timeout": 0, 00:22:06.509 "ctrlr_loss_timeout_sec": 0, 00:22:06.509 "reconnect_delay_sec": 0, 00:22:06.509 "fast_io_fail_timeout_sec": 0, 00:22:06.509 "disable_auto_failback": false, 00:22:06.509 "generate_uuids": false, 00:22:06.509 "transport_tos": 0, 00:22:06.509 "nvme_error_stat": false, 00:22:06.509 "rdma_srq_size": 0, 00:22:06.509 "io_path_stat": false, 00:22:06.509 "allow_accel_sequence": false, 00:22:06.509 "rdma_max_cq_size": 0, 00:22:06.509 "rdma_cm_event_timeout_ms": 0, 00:22:06.509 "dhchap_digests": [ 00:22:06.509 "sha256", 00:22:06.509 "sha384", 00:22:06.509 "sha512" 00:22:06.509 ], 00:22:06.509 "dhchap_dhgroups": [ 00:22:06.509 "null", 00:22:06.509 "ffdhe2048", 00:22:06.509 "ffdhe3072", 00:22:06.509 "ffdhe4096", 00:22:06.509 "ffdhe6144", 00:22:06.509 "ffdhe8192" 00:22:06.509 ] 00:22:06.509 } 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "method": "bdev_nvme_attach_controller", 00:22:06.509 "params": { 00:22:06.509 "name": "TLSTEST", 00:22:06.509 "trtype": "TCP", 00:22:06.509 "adrfam": "IPv4", 00:22:06.509 "traddr": "10.0.0.2", 00:22:06.509 "trsvcid": "4420", 00:22:06.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.509 "prchk_reftag": false, 00:22:06.509 "prchk_guard": false, 00:22:06.509 "ctrlr_loss_timeout_sec": 0, 00:22:06.509 "reconnect_delay_sec": 0, 00:22:06.509 "fast_io_fail_timeout_sec": 0, 00:22:06.509 "psk": "/tmp/tmp.O1tPXhzYW3", 00:22:06.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:06.509 "hdgst": false, 00:22:06.509 "ddgst": false 00:22:06.509 } 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "method": "bdev_nvme_set_hotplug", 00:22:06.509 "params": { 00:22:06.509 "period_us": 100000, 00:22:06.509 "enable": false 00:22:06.509 } 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "method": "bdev_wait_for_examine" 00:22:06.509 } 00:22:06.509 ] 00:22:06.509 }, 00:22:06.509 { 00:22:06.509 "subsystem": "nbd", 00:22:06.509 "config": [] 00:22:06.509 } 00:22:06.509 ] 00:22:06.509 }' 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.509 00:48:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.509 [2024-07-16 00:48:24.169076] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:22:06.509 [2024-07-16 00:48:24.169139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088720 ] 00:22:06.509 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.509 [2024-07-16 00:48:24.284895] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.767 [2024-07-16 00:48:24.430006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.025 [2024-07-16 00:48:24.639043] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.026 [2024-07-16 00:48:24.639204] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:07.591 00:48:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.591 00:48:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:07.591 00:48:25 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:07.591 Running I/O for 10 seconds... 00:22:17.568 00:22:17.568 Latency(us) 00:22:17.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.568 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:17.568 Verification LBA range: start 0x0 length 0x2000 00:22:17.568 TLSTESTn1 : 10.03 2147.60 8.39 0.00 0.00 59453.91 11200.70 51952.17 00:22:17.568 =================================================================================================================== 00:22:17.568 Total : 2147.60 8.39 0.00 0.00 59453.91 11200.70 51952.17 00:22:17.568 0 00:22:17.568 00:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:17.568 00:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3088720 00:22:17.568 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3088720 ']' 00:22:17.568 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3088720 00:22:17.568 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:17.568 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.568 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3088720 00:22:17.568 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:17.568 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:17.568 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3088720' 00:22:17.568 killing process with pid 3088720 00:22:17.568 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3088720 00:22:17.568 Received shutdown signal, test time was about 10.000000 seconds 00:22:17.568 00:22:17.568 Latency(us) 00:22:17.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.568 =================================================================================================================== 00:22:17.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.568 [2024-07-16 00:48:35.371572] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:17.568 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3088720 00:22:17.827 00:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3088446 00:22:17.827 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3088446 ']' 00:22:17.827 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3088446 00:22:17.827 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:18.086 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:18.086 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3088446 00:22:18.086 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:18.086 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:18.086 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3088446' 00:22:18.086 killing process with pid 3088446 00:22:18.086 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3088446 00:22:18.086 [2024-07-16 00:48:35.714065] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:18.086 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3088446 00:22:18.345 00:48:35 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:18.345 00:48:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:18.345 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:18.345 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.345 00:48:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3090738 00:22:18.345 00:48:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3090738 00:22:18.345 00:48:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:18.345 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3090738 ']' 00:22:18.345 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.345 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.345 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.345 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.345 00:48:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.345 [2024-07-16 00:48:36.010488] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
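As a quick sanity check on the 10-second TLSTESTn1 table above: 2147.60 IOPS at the 4096-byte IO size works out to 2147.60 × 4096 B ÷ 2^20 ≈ 8.39 MiB/s, matching the MiB/s column.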
00:22:18.345 [2024-07-16 00:48:36.010553] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.345 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.345 [2024-07-16 00:48:36.096870] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.604 [2024-07-16 00:48:36.186710] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.604 [2024-07-16 00:48:36.186755] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.604 [2024-07-16 00:48:36.186765] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.604 [2024-07-16 00:48:36.186774] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.604 [2024-07-16 00:48:36.186781] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.604 [2024-07-16 00:48:36.186811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.172 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.172 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:19.172 00:48:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.172 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:19.172 00:48:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.172 00:48:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.172 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.O1tPXhzYW3 00:22:19.172 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.O1tPXhzYW3 00:22:19.172 00:48:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:19.430 [2024-07-16 00:48:37.209618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.430 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:19.690 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:19.949 [2024-07-16 00:48:37.706935] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.949 [2024-07-16 00:48:37.707139] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.949 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:20.207 malloc0 00:22:20.207 00:48:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:20.466 00:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.O1tPXhzYW3 00:22:20.726 [2024-07-16 00:48:38.446133] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:20.726 00:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:20.726 00:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3091123 00:22:20.726 00:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:20.726 00:48:38 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3091123 /var/tmp/bdevperf.sock 00:22:20.726 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3091123 ']' 00:22:20.726 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.726 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:20.726 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:20.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:20.726 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:20.726 00:48:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.726 [2024-07-16 00:48:38.511220] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:22:20.726 [2024-07-16 00:48:38.511290] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091123 ] 00:22:20.726 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.985 [2024-07-16 00:48:38.595665] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.985 [2024-07-16 00:48:38.700508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.922 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:21.922 00:48:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:21.922 00:48:39 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.O1tPXhzYW3 00:22:21.922 00:48:39 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:22.181 [2024-07-16 00:48:39.957096] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.440 nvme0n1 00:22:22.440 00:48:40 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:22.698 Running I/O for 1 seconds... 
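Unlike the first pass, which baked everything into the startup JSON, this second pass (target pid 3090738, bdevperf pid 3091123) builds the TLS setup entirely at runtime over rpc.py. Condensed from the invocations scattered through the records above, and with the Jenkins prefix and netns wrapper dropped, the sequence is roughly:

    RPC=scripts/rpc.py
    # target side, default /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o        # flags as invoked by tls.sh; c2h_success shows as false in the dumps
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O1tPXhzYW3
    # initiator side, over bdevperf's RPC socket
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.O1tPXhzYW3
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1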
00:22:23.634 00:22:23.634 Latency(us) 00:22:23.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.634 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:23.634 Verification LBA range: start 0x0 length 0x2000 00:22:23.634 nvme0n1 : 1.02 3576.76 13.97 0.00 0.00 35402.67 9949.56 57671.68 00:22:23.634 =================================================================================================================== 00:22:23.634 Total : 3576.76 13.97 0.00 0.00 35402.67 9949.56 57671.68 00:22:23.634 0 00:22:23.634 00:48:41 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3091123 00:22:23.634 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3091123 ']' 00:22:23.634 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3091123 00:22:23.634 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:23.634 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.634 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3091123 00:22:23.634 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:23.634 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:23.634 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3091123' 00:22:23.634 killing process with pid 3091123 00:22:23.634 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3091123 00:22:23.634 Received shutdown signal, test time was about 1.000000 seconds 00:22:23.634 00:22:23.634 Latency(us) 00:22:23.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.634 =================================================================================================================== 00:22:23.634 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.634 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3091123 00:22:23.893 00:48:41 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3090738 00:22:23.893 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3090738 ']' 00:22:23.893 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3090738 00:22:23.893 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:23.893 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.893 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3090738 00:22:23.893 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:23.893 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:23.893 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3090738' 00:22:23.893 killing process with pid 3090738 00:22:23.893 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3090738 00:22:23.893 [2024-07-16 00:48:41.651019] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:23.893 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3090738 00:22:24.153 00:48:41 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:24.153 00:48:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:24.153 
00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:24.153 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 00:48:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3091781 00:22:24.153 00:48:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3091781 00:22:24.153 00:48:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:24.153 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3091781 ']' 00:22:24.153 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.153 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:24.153 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.153 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:24.153 00:48:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.153 [2024-07-16 00:48:41.928640] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:22:24.153 [2024-07-16 00:48:41.928704] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.153 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.412 [2024-07-16 00:48:42.015515] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.412 [2024-07-16 00:48:42.104066] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.412 [2024-07-16 00:48:42.104115] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.412 [2024-07-16 00:48:42.104124] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.412 [2024-07-16 00:48:42.104133] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.412 [2024-07-16 00:48:42.104140] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:24.412 [2024-07-16 00:48:42.104163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.349 [2024-07-16 00:48:42.909630] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.349 malloc0 00:22:25.349 [2024-07-16 00:48:42.938959] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:25.349 [2024-07-16 00:48:42.939168] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3091942 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3091942 /var/tmp/bdevperf.sock 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3091942 ']' 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.349 00:48:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.349 [2024-07-16 00:48:43.013721] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
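This third pass (target pid 3091781, bdevperf pid 3091942) repeats the runtime setup in order to exercise configuration capture: after the short verify run, tls.sh snapshots both daemons with save_config, which is what produces the long tgtcfg and bperfcfg JSON dumps below, and the captured JSON is later echoed back into a fresh nvmf_tgt via '-c /dev/fd/62'. In sketch form (rpc_cmd in the script is the autotest wrapper for issuing the same RPCs):

    # capture the running target's configuration (default /var/tmp/spdk.sock)
    tgtcfg=$(scripts/rpc.py save_config)
    # capture the bdevperf initiator's configuration over its own socket
    bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)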
00:22:25.349 [2024-07-16 00:48:43.013763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091942 ] 00:22:25.349 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.349 [2024-07-16 00:48:43.085091] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.613 [2024-07-16 00:48:43.193213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.613 00:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.613 00:48:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:25.613 00:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.O1tPXhzYW3 00:22:25.874 00:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:26.132 [2024-07-16 00:48:43.780980] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:26.132 nvme0n1 00:22:26.132 00:48:43 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:26.391 Running I/O for 1 seconds... 00:22:27.326 00:22:27.326 Latency(us) 00:22:27.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.326 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:27.326 Verification LBA range: start 0x0 length 0x2000 00:22:27.326 nvme0n1 : 1.02 3636.84 14.21 0.00 0.00 34788.04 8996.31 38368.35 00:22:27.326 =================================================================================================================== 00:22:27.326 Total : 3636.84 14.21 0.00 0.00 34788.04 8996.31 38368.35 00:22:27.326 0 00:22:27.326 00:48:45 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:27.326 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.326 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.326 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.326 00:48:45 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:27.326 "subsystems": [ 00:22:27.326 { 00:22:27.326 "subsystem": "keyring", 00:22:27.326 "config": [ 00:22:27.326 { 00:22:27.326 "method": "keyring_file_add_key", 00:22:27.326 "params": { 00:22:27.326 "name": "key0", 00:22:27.326 "path": "/tmp/tmp.O1tPXhzYW3" 00:22:27.326 } 00:22:27.326 } 00:22:27.326 ] 00:22:27.326 }, 00:22:27.326 { 00:22:27.326 "subsystem": "iobuf", 00:22:27.326 "config": [ 00:22:27.326 { 00:22:27.327 "method": "iobuf_set_options", 00:22:27.327 "params": { 00:22:27.327 "small_pool_count": 8192, 00:22:27.327 "large_pool_count": 1024, 00:22:27.327 "small_bufsize": 8192, 00:22:27.327 "large_bufsize": 135168 00:22:27.327 } 00:22:27.327 } 00:22:27.327 ] 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "subsystem": "sock", 00:22:27.327 "config": [ 00:22:27.327 { 00:22:27.327 "method": "sock_set_default_impl", 00:22:27.327 "params": { 00:22:27.327 "impl_name": "posix" 00:22:27.327 } 
00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "sock_impl_set_options", 00:22:27.327 "params": { 00:22:27.327 "impl_name": "ssl", 00:22:27.327 "recv_buf_size": 4096, 00:22:27.327 "send_buf_size": 4096, 00:22:27.327 "enable_recv_pipe": true, 00:22:27.327 "enable_quickack": false, 00:22:27.327 "enable_placement_id": 0, 00:22:27.327 "enable_zerocopy_send_server": true, 00:22:27.327 "enable_zerocopy_send_client": false, 00:22:27.327 "zerocopy_threshold": 0, 00:22:27.327 "tls_version": 0, 00:22:27.327 "enable_ktls": false 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "sock_impl_set_options", 00:22:27.327 "params": { 00:22:27.327 "impl_name": "posix", 00:22:27.327 "recv_buf_size": 2097152, 00:22:27.327 "send_buf_size": 2097152, 00:22:27.327 "enable_recv_pipe": true, 00:22:27.327 "enable_quickack": false, 00:22:27.327 "enable_placement_id": 0, 00:22:27.327 "enable_zerocopy_send_server": true, 00:22:27.327 "enable_zerocopy_send_client": false, 00:22:27.327 "zerocopy_threshold": 0, 00:22:27.327 "tls_version": 0, 00:22:27.327 "enable_ktls": false 00:22:27.327 } 00:22:27.327 } 00:22:27.327 ] 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "subsystem": "vmd", 00:22:27.327 "config": [] 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "subsystem": "accel", 00:22:27.327 "config": [ 00:22:27.327 { 00:22:27.327 "method": "accel_set_options", 00:22:27.327 "params": { 00:22:27.327 "small_cache_size": 128, 00:22:27.327 "large_cache_size": 16, 00:22:27.327 "task_count": 2048, 00:22:27.327 "sequence_count": 2048, 00:22:27.327 "buf_count": 2048 00:22:27.327 } 00:22:27.327 } 00:22:27.327 ] 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "subsystem": "bdev", 00:22:27.327 "config": [ 00:22:27.327 { 00:22:27.327 "method": "bdev_set_options", 00:22:27.327 "params": { 00:22:27.327 "bdev_io_pool_size": 65535, 00:22:27.327 "bdev_io_cache_size": 256, 00:22:27.327 "bdev_auto_examine": true, 00:22:27.327 "iobuf_small_cache_size": 128, 00:22:27.327 "iobuf_large_cache_size": 16 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "bdev_raid_set_options", 00:22:27.327 "params": { 00:22:27.327 "process_window_size_kb": 1024 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "bdev_iscsi_set_options", 00:22:27.327 "params": { 00:22:27.327 "timeout_sec": 30 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "bdev_nvme_set_options", 00:22:27.327 "params": { 00:22:27.327 "action_on_timeout": "none", 00:22:27.327 "timeout_us": 0, 00:22:27.327 "timeout_admin_us": 0, 00:22:27.327 "keep_alive_timeout_ms": 10000, 00:22:27.327 "arbitration_burst": 0, 00:22:27.327 "low_priority_weight": 0, 00:22:27.327 "medium_priority_weight": 0, 00:22:27.327 "high_priority_weight": 0, 00:22:27.327 "nvme_adminq_poll_period_us": 10000, 00:22:27.327 "nvme_ioq_poll_period_us": 0, 00:22:27.327 "io_queue_requests": 0, 00:22:27.327 "delay_cmd_submit": true, 00:22:27.327 "transport_retry_count": 4, 00:22:27.327 "bdev_retry_count": 3, 00:22:27.327 "transport_ack_timeout": 0, 00:22:27.327 "ctrlr_loss_timeout_sec": 0, 00:22:27.327 "reconnect_delay_sec": 0, 00:22:27.327 "fast_io_fail_timeout_sec": 0, 00:22:27.327 "disable_auto_failback": false, 00:22:27.327 "generate_uuids": false, 00:22:27.327 "transport_tos": 0, 00:22:27.327 "nvme_error_stat": false, 00:22:27.327 "rdma_srq_size": 0, 00:22:27.327 "io_path_stat": false, 00:22:27.327 "allow_accel_sequence": false, 00:22:27.327 "rdma_max_cq_size": 0, 00:22:27.327 "rdma_cm_event_timeout_ms": 0, 00:22:27.327 "dhchap_digests": [ 00:22:27.327 "sha256", 
00:22:27.327 "sha384", 00:22:27.327 "sha512" 00:22:27.327 ], 00:22:27.327 "dhchap_dhgroups": [ 00:22:27.327 "null", 00:22:27.327 "ffdhe2048", 00:22:27.327 "ffdhe3072", 00:22:27.327 "ffdhe4096", 00:22:27.327 "ffdhe6144", 00:22:27.327 "ffdhe8192" 00:22:27.327 ] 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "bdev_nvme_set_hotplug", 00:22:27.327 "params": { 00:22:27.327 "period_us": 100000, 00:22:27.327 "enable": false 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "bdev_malloc_create", 00:22:27.327 "params": { 00:22:27.327 "name": "malloc0", 00:22:27.327 "num_blocks": 8192, 00:22:27.327 "block_size": 4096, 00:22:27.327 "physical_block_size": 4096, 00:22:27.327 "uuid": "52b66bda-64f7-4c32-902f-5cec30c51ea3", 00:22:27.327 "optimal_io_boundary": 0 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "bdev_wait_for_examine" 00:22:27.327 } 00:22:27.327 ] 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "subsystem": "nbd", 00:22:27.327 "config": [] 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "subsystem": "scheduler", 00:22:27.327 "config": [ 00:22:27.327 { 00:22:27.327 "method": "framework_set_scheduler", 00:22:27.327 "params": { 00:22:27.327 "name": "static" 00:22:27.327 } 00:22:27.327 } 00:22:27.327 ] 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "subsystem": "nvmf", 00:22:27.327 "config": [ 00:22:27.327 { 00:22:27.327 "method": "nvmf_set_config", 00:22:27.327 "params": { 00:22:27.327 "discovery_filter": "match_any", 00:22:27.327 "admin_cmd_passthru": { 00:22:27.327 "identify_ctrlr": false 00:22:27.327 } 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "nvmf_set_max_subsystems", 00:22:27.327 "params": { 00:22:27.327 "max_subsystems": 1024 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "nvmf_set_crdt", 00:22:27.327 "params": { 00:22:27.327 "crdt1": 0, 00:22:27.327 "crdt2": 0, 00:22:27.327 "crdt3": 0 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "nvmf_create_transport", 00:22:27.327 "params": { 00:22:27.327 "trtype": "TCP", 00:22:27.327 "max_queue_depth": 128, 00:22:27.327 "max_io_qpairs_per_ctrlr": 127, 00:22:27.327 "in_capsule_data_size": 4096, 00:22:27.327 "max_io_size": 131072, 00:22:27.327 "io_unit_size": 131072, 00:22:27.327 "max_aq_depth": 128, 00:22:27.327 "num_shared_buffers": 511, 00:22:27.327 "buf_cache_size": 4294967295, 00:22:27.327 "dif_insert_or_strip": false, 00:22:27.327 "zcopy": false, 00:22:27.327 "c2h_success": false, 00:22:27.327 "sock_priority": 0, 00:22:27.327 "abort_timeout_sec": 1, 00:22:27.327 "ack_timeout": 0, 00:22:27.327 "data_wr_pool_size": 0 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "nvmf_create_subsystem", 00:22:27.327 "params": { 00:22:27.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.327 "allow_any_host": false, 00:22:27.327 "serial_number": "00000000000000000000", 00:22:27.327 "model_number": "SPDK bdev Controller", 00:22:27.327 "max_namespaces": 32, 00:22:27.327 "min_cntlid": 1, 00:22:27.327 "max_cntlid": 65519, 00:22:27.327 "ana_reporting": false 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "nvmf_subsystem_add_host", 00:22:27.327 "params": { 00:22:27.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.327 "host": "nqn.2016-06.io.spdk:host1", 00:22:27.327 "psk": "key0" 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "nvmf_subsystem_add_ns", 00:22:27.327 "params": { 00:22:27.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.327 "namespace": { 00:22:27.327 "nsid": 1, 
00:22:27.327 "bdev_name": "malloc0", 00:22:27.327 "nguid": "52B66BDA64F74C32902F5CEC30C51EA3", 00:22:27.327 "uuid": "52b66bda-64f7-4c32-902f-5cec30c51ea3", 00:22:27.327 "no_auto_visible": false 00:22:27.327 } 00:22:27.327 } 00:22:27.327 }, 00:22:27.327 { 00:22:27.327 "method": "nvmf_subsystem_add_listener", 00:22:27.327 "params": { 00:22:27.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.327 "listen_address": { 00:22:27.327 "trtype": "TCP", 00:22:27.327 "adrfam": "IPv4", 00:22:27.327 "traddr": "10.0.0.2", 00:22:27.327 "trsvcid": "4420" 00:22:27.327 }, 00:22:27.327 "secure_channel": false, 00:22:27.327 "sock_impl": "ssl" 00:22:27.327 } 00:22:27.327 } 00:22:27.327 ] 00:22:27.327 } 00:22:27.327 ] 00:22:27.327 }' 00:22:27.327 00:48:45 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:27.895 "subsystems": [ 00:22:27.895 { 00:22:27.895 "subsystem": "keyring", 00:22:27.895 "config": [ 00:22:27.895 { 00:22:27.895 "method": "keyring_file_add_key", 00:22:27.895 "params": { 00:22:27.895 "name": "key0", 00:22:27.895 "path": "/tmp/tmp.O1tPXhzYW3" 00:22:27.895 } 00:22:27.895 } 00:22:27.895 ] 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "subsystem": "iobuf", 00:22:27.895 "config": [ 00:22:27.895 { 00:22:27.895 "method": "iobuf_set_options", 00:22:27.895 "params": { 00:22:27.895 "small_pool_count": 8192, 00:22:27.895 "large_pool_count": 1024, 00:22:27.895 "small_bufsize": 8192, 00:22:27.895 "large_bufsize": 135168 00:22:27.895 } 00:22:27.895 } 00:22:27.895 ] 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "subsystem": "sock", 00:22:27.895 "config": [ 00:22:27.895 { 00:22:27.895 "method": "sock_set_default_impl", 00:22:27.895 "params": { 00:22:27.895 "impl_name": "posix" 00:22:27.895 } 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "method": "sock_impl_set_options", 00:22:27.895 "params": { 00:22:27.895 "impl_name": "ssl", 00:22:27.895 "recv_buf_size": 4096, 00:22:27.895 "send_buf_size": 4096, 00:22:27.895 "enable_recv_pipe": true, 00:22:27.895 "enable_quickack": false, 00:22:27.895 "enable_placement_id": 0, 00:22:27.895 "enable_zerocopy_send_server": true, 00:22:27.895 "enable_zerocopy_send_client": false, 00:22:27.895 "zerocopy_threshold": 0, 00:22:27.895 "tls_version": 0, 00:22:27.895 "enable_ktls": false 00:22:27.895 } 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "method": "sock_impl_set_options", 00:22:27.895 "params": { 00:22:27.895 "impl_name": "posix", 00:22:27.895 "recv_buf_size": 2097152, 00:22:27.895 "send_buf_size": 2097152, 00:22:27.895 "enable_recv_pipe": true, 00:22:27.895 "enable_quickack": false, 00:22:27.895 "enable_placement_id": 0, 00:22:27.895 "enable_zerocopy_send_server": true, 00:22:27.895 "enable_zerocopy_send_client": false, 00:22:27.895 "zerocopy_threshold": 0, 00:22:27.895 "tls_version": 0, 00:22:27.895 "enable_ktls": false 00:22:27.895 } 00:22:27.895 } 00:22:27.895 ] 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "subsystem": "vmd", 00:22:27.895 "config": [] 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "subsystem": "accel", 00:22:27.895 "config": [ 00:22:27.895 { 00:22:27.895 "method": "accel_set_options", 00:22:27.895 "params": { 00:22:27.895 "small_cache_size": 128, 00:22:27.895 "large_cache_size": 16, 00:22:27.895 "task_count": 2048, 00:22:27.895 "sequence_count": 2048, 00:22:27.895 "buf_count": 2048 00:22:27.895 } 00:22:27.895 } 00:22:27.895 ] 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "subsystem": "bdev", 
00:22:27.895 "config": [ 00:22:27.895 { 00:22:27.895 "method": "bdev_set_options", 00:22:27.895 "params": { 00:22:27.895 "bdev_io_pool_size": 65535, 00:22:27.895 "bdev_io_cache_size": 256, 00:22:27.895 "bdev_auto_examine": true, 00:22:27.895 "iobuf_small_cache_size": 128, 00:22:27.895 "iobuf_large_cache_size": 16 00:22:27.895 } 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "method": "bdev_raid_set_options", 00:22:27.895 "params": { 00:22:27.895 "process_window_size_kb": 1024 00:22:27.895 } 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "method": "bdev_iscsi_set_options", 00:22:27.895 "params": { 00:22:27.895 "timeout_sec": 30 00:22:27.895 } 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "method": "bdev_nvme_set_options", 00:22:27.895 "params": { 00:22:27.895 "action_on_timeout": "none", 00:22:27.895 "timeout_us": 0, 00:22:27.895 "timeout_admin_us": 0, 00:22:27.895 "keep_alive_timeout_ms": 10000, 00:22:27.895 "arbitration_burst": 0, 00:22:27.895 "low_priority_weight": 0, 00:22:27.895 "medium_priority_weight": 0, 00:22:27.895 "high_priority_weight": 0, 00:22:27.895 "nvme_adminq_poll_period_us": 10000, 00:22:27.895 "nvme_ioq_poll_period_us": 0, 00:22:27.895 "io_queue_requests": 512, 00:22:27.895 "delay_cmd_submit": true, 00:22:27.895 "transport_retry_count": 4, 00:22:27.895 "bdev_retry_count": 3, 00:22:27.895 "transport_ack_timeout": 0, 00:22:27.895 "ctrlr_loss_timeout_sec": 0, 00:22:27.895 "reconnect_delay_sec": 0, 00:22:27.895 "fast_io_fail_timeout_sec": 0, 00:22:27.895 "disable_auto_failback": false, 00:22:27.895 "generate_uuids": false, 00:22:27.895 "transport_tos": 0, 00:22:27.895 "nvme_error_stat": false, 00:22:27.895 "rdma_srq_size": 0, 00:22:27.895 "io_path_stat": false, 00:22:27.895 "allow_accel_sequence": false, 00:22:27.895 "rdma_max_cq_size": 0, 00:22:27.895 "rdma_cm_event_timeout_ms": 0, 00:22:27.895 "dhchap_digests": [ 00:22:27.895 "sha256", 00:22:27.895 "sha384", 00:22:27.895 "sha512" 00:22:27.895 ], 00:22:27.895 "dhchap_dhgroups": [ 00:22:27.895 "null", 00:22:27.895 "ffdhe2048", 00:22:27.895 "ffdhe3072", 00:22:27.895 "ffdhe4096", 00:22:27.895 "ffdhe6144", 00:22:27.895 "ffdhe8192" 00:22:27.895 ] 00:22:27.895 } 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "method": "bdev_nvme_attach_controller", 00:22:27.895 "params": { 00:22:27.895 "name": "nvme0", 00:22:27.895 "trtype": "TCP", 00:22:27.895 "adrfam": "IPv4", 00:22:27.895 "traddr": "10.0.0.2", 00:22:27.895 "trsvcid": "4420", 00:22:27.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.895 "prchk_reftag": false, 00:22:27.895 "prchk_guard": false, 00:22:27.895 "ctrlr_loss_timeout_sec": 0, 00:22:27.895 "reconnect_delay_sec": 0, 00:22:27.895 "fast_io_fail_timeout_sec": 0, 00:22:27.895 "psk": "key0", 00:22:27.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.895 "hdgst": false, 00:22:27.895 "ddgst": false 00:22:27.895 } 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "method": "bdev_nvme_set_hotplug", 00:22:27.895 "params": { 00:22:27.895 "period_us": 100000, 00:22:27.895 "enable": false 00:22:27.895 } 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "method": "bdev_enable_histogram", 00:22:27.895 "params": { 00:22:27.895 "name": "nvme0n1", 00:22:27.895 "enable": true 00:22:27.895 } 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "method": "bdev_wait_for_examine" 00:22:27.895 } 00:22:27.895 ] 00:22:27.895 }, 00:22:27.895 { 00:22:27.895 "subsystem": "nbd", 00:22:27.895 "config": [] 00:22:27.895 } 00:22:27.895 ] 00:22:27.895 }' 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 3091942 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 3091942 ']' 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3091942 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3091942 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3091942' 00:22:27.895 killing process with pid 3091942 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3091942 00:22:27.895 Received shutdown signal, test time was about 1.000000 seconds 00:22:27.895 00:22:27.895 Latency(us) 00:22:27.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.895 =================================================================================================================== 00:22:27.895 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3091942 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 3091781 00:22:27.895 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3091781 ']' 00:22:27.896 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3091781 00:22:27.896 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:28.154 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.154 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3091781 00:22:28.154 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:28.154 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:28.154 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3091781' 00:22:28.154 killing process with pid 3091781 00:22:28.154 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3091781 00:22:28.154 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3091781 00:22:28.154 00:48:45 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:28.154 00:48:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:28.154 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:28.154 00:48:45 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:28.154 "subsystems": [ 00:22:28.154 { 00:22:28.154 "subsystem": "keyring", 00:22:28.154 "config": [ 00:22:28.154 { 00:22:28.154 "method": "keyring_file_add_key", 00:22:28.154 "params": { 00:22:28.154 "name": "key0", 00:22:28.154 "path": "/tmp/tmp.O1tPXhzYW3" 00:22:28.154 } 00:22:28.154 } 00:22:28.154 ] 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "subsystem": "iobuf", 00:22:28.154 "config": [ 00:22:28.154 { 00:22:28.154 "method": "iobuf_set_options", 00:22:28.154 "params": { 00:22:28.154 "small_pool_count": 8192, 00:22:28.154 "large_pool_count": 1024, 00:22:28.154 "small_bufsize": 8192, 00:22:28.154 "large_bufsize": 135168 00:22:28.154 } 00:22:28.154 } 00:22:28.154 ] 00:22:28.154 }, 
00:22:28.154 { 00:22:28.154 "subsystem": "sock", 00:22:28.154 "config": [ 00:22:28.154 { 00:22:28.154 "method": "sock_set_default_impl", 00:22:28.154 "params": { 00:22:28.154 "impl_name": "posix" 00:22:28.154 } 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "method": "sock_impl_set_options", 00:22:28.154 "params": { 00:22:28.154 "impl_name": "ssl", 00:22:28.154 "recv_buf_size": 4096, 00:22:28.154 "send_buf_size": 4096, 00:22:28.154 "enable_recv_pipe": true, 00:22:28.154 "enable_quickack": false, 00:22:28.154 "enable_placement_id": 0, 00:22:28.154 "enable_zerocopy_send_server": true, 00:22:28.154 "enable_zerocopy_send_client": false, 00:22:28.154 "zerocopy_threshold": 0, 00:22:28.154 "tls_version": 0, 00:22:28.154 "enable_ktls": false 00:22:28.154 } 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "method": "sock_impl_set_options", 00:22:28.154 "params": { 00:22:28.154 "impl_name": "posix", 00:22:28.154 "recv_buf_size": 2097152, 00:22:28.154 "send_buf_size": 2097152, 00:22:28.154 "enable_recv_pipe": true, 00:22:28.154 "enable_quickack": false, 00:22:28.154 "enable_placement_id": 0, 00:22:28.154 "enable_zerocopy_send_server": true, 00:22:28.154 "enable_zerocopy_send_client": false, 00:22:28.154 "zerocopy_threshold": 0, 00:22:28.154 "tls_version": 0, 00:22:28.154 "enable_ktls": false 00:22:28.154 } 00:22:28.154 } 00:22:28.154 ] 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "subsystem": "vmd", 00:22:28.154 "config": [] 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "subsystem": "accel", 00:22:28.154 "config": [ 00:22:28.154 { 00:22:28.154 "method": "accel_set_options", 00:22:28.154 "params": { 00:22:28.154 "small_cache_size": 128, 00:22:28.154 "large_cache_size": 16, 00:22:28.154 "task_count": 2048, 00:22:28.154 "sequence_count": 2048, 00:22:28.154 "buf_count": 2048 00:22:28.154 } 00:22:28.154 } 00:22:28.154 ] 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "subsystem": "bdev", 00:22:28.154 "config": [ 00:22:28.154 { 00:22:28.154 "method": "bdev_set_options", 00:22:28.154 "params": { 00:22:28.154 "bdev_io_pool_size": 65535, 00:22:28.154 "bdev_io_cache_size": 256, 00:22:28.154 "bdev_auto_examine": true, 00:22:28.154 "iobuf_small_cache_size": 128, 00:22:28.154 "iobuf_large_cache_size": 16 00:22:28.154 } 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "method": "bdev_raid_set_options", 00:22:28.154 "params": { 00:22:28.154 "process_window_size_kb": 1024 00:22:28.154 } 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "method": "bdev_iscsi_set_options", 00:22:28.154 "params": { 00:22:28.154 "timeout_sec": 30 00:22:28.154 } 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "method": "bdev_nvme_set_options", 00:22:28.154 "params": { 00:22:28.154 "action_on_timeout": "none", 00:22:28.154 "timeout_us": 0, 00:22:28.154 "timeout_admin_us": 0, 00:22:28.154 "keep_alive_timeout_ms": 10000, 00:22:28.154 "arbitration_burst": 0, 00:22:28.154 "low_priority_weight": 0, 00:22:28.154 "medium_priority_weight": 0, 00:22:28.154 "high_priority_weight": 0, 00:22:28.154 "nvme_adminq_poll_period_us": 10000, 00:22:28.154 "nvme_ioq_poll_period_us": 0, 00:22:28.154 "io_queue_requests": 0, 00:22:28.154 "delay_cmd_submit": true, 00:22:28.154 "transport_retry_count": 4, 00:22:28.154 "bdev_retry_count": 3, 00:22:28.154 "transport_ack_timeout": 0, 00:22:28.154 "ctrlr_loss_timeout_sec": 0, 00:22:28.154 "reconnect_delay_sec": 0, 00:22:28.154 "fast_io_fail_timeout_sec": 0, 00:22:28.154 "disable_auto_failback": false, 00:22:28.154 "generate_uuids": false, 00:22:28.154 "transport_tos": 0, 00:22:28.154 "nvme_error_stat": false, 00:22:28.154 "rdma_srq_size": 0, 
00:22:28.154 "io_path_stat": false, 00:22:28.154 "allow_accel_sequence": false, 00:22:28.154 "rdma_max_cq_size": 0, 00:22:28.154 "rdma_cm_event_timeout_ms": 0, 00:22:28.154 "dhchap_digests": [ 00:22:28.154 "sha256", 00:22:28.154 "sha384", 00:22:28.154 "sha512" 00:22:28.154 ], 00:22:28.154 "dhchap_dhgroups": [ 00:22:28.154 "null", 00:22:28.154 "ffdhe2048", 00:22:28.154 "ffdhe3072", 00:22:28.154 "ffdhe4096", 00:22:28.154 "ffdhe6144", 00:22:28.154 "ffdhe8192" 00:22:28.154 ] 00:22:28.154 } 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "method": "bdev_nvme_set_hotplug", 00:22:28.154 "params": { 00:22:28.154 "period_us": 100000, 00:22:28.154 "enable": false 00:22:28.154 } 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "method": "bdev_malloc_create", 00:22:28.154 "params": { 00:22:28.154 "name": "malloc0", 00:22:28.154 "num_blocks": 8192, 00:22:28.154 "block_size": 4096, 00:22:28.154 "physical_block_size": 4096, 00:22:28.154 "uuid": "52b66bda-64f7-4c32-902f-5cec30c51ea3", 00:22:28.154 "optimal_io_boundary": 0 00:22:28.154 } 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "method": "bdev_wait_for_examine" 00:22:28.154 } 00:22:28.154 ] 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "subsystem": "nbd", 00:22:28.154 "config": [] 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "subsystem": "scheduler", 00:22:28.154 "config": [ 00:22:28.154 { 00:22:28.154 "method": "framework_set_scheduler", 00:22:28.154 "params": { 00:22:28.154 "name": "static" 00:22:28.154 } 00:22:28.154 } 00:22:28.154 ] 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "subsystem": "nvmf", 00:22:28.154 "config": [ 00:22:28.154 { 00:22:28.154 "method": "nvmf_set_config", 00:22:28.154 "params": { 00:22:28.154 "discovery_filter": "match_any", 00:22:28.154 "admin_cmd_passthru": { 00:22:28.154 "identify_ctrlr": false 00:22:28.154 } 00:22:28.154 } 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "method": "nvmf_set_max_subsystems", 00:22:28.154 "params": { 00:22:28.154 "max_subsystems": 1024 00:22:28.154 } 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "method": "nvmf_set_crdt", 00:22:28.154 "params": { 00:22:28.154 "crdt1": 0, 00:22:28.154 "crdt2": 0, 00:22:28.154 "crdt3": 0 00:22:28.154 } 00:22:28.154 }, 00:22:28.154 { 00:22:28.154 "method": "nvmf_create_transport", 00:22:28.154 "params": { 00:22:28.154 "trtype": "TCP", 00:22:28.154 "max_queue_depth": 128, 00:22:28.154 "max_io_qpairs_per_ctrlr": 127, 00:22:28.154 "in_capsule_data_size": 4096, 00:22:28.154 "max_io_size": 131072, 00:22:28.154 "io_unit_size": 131072, 00:22:28.154 "max_aq_depth": 128, 00:22:28.154 "num_shared_buffers": 511, 00:22:28.154 "buf_cache_size": 4294967295, 00:22:28.154 "dif_insert_or_strip": false, 00:22:28.154 "zcopy": false, 00:22:28.155 "c2h_success": false, 00:22:28.155 "sock_priority": 0, 00:22:28.155 "abort_timeout_sec": 1, 00:22:28.155 "ack_timeout": 0, 00:22:28.155 "data_wr_pool_size": 0 00:22:28.155 } 00:22:28.155 }, 00:22:28.155 { 00:22:28.155 "method": "nvmf_create_subsystem", 00:22:28.155 "params": { 00:22:28.155 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.155 "allow_any_host": false, 00:22:28.155 "serial_number": "00000000000000000000", 00:22:28.155 "model_number": "SPDK bdev Controller", 00:22:28.155 "max_namespaces": 32, 00:22:28.155 "min_cntlid": 1, 00:22:28.155 "max_cntlid": 65519, 00:22:28.155 "ana_reporting": false 00:22:28.155 } 00:22:28.155 }, 00:22:28.155 { 00:22:28.155 "method": "nvmf_subsystem_add_host", 00:22:28.155 "params": { 00:22:28.155 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.155 "host": "nqn.2016-06.io.spdk:host1", 00:22:28.155 "psk": "key0" 00:22:28.155 } 
00:22:28.155 }, 00:22:28.155 { 00:22:28.155 "method": "nvmf_subsystem_add_ns", 00:22:28.155 "params": { 00:22:28.155 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.155 "namespace": { 00:22:28.155 "nsid": 1, 00:22:28.155 "bdev_name": "malloc0", 00:22:28.155 "nguid": "52B66BDA64F74C32902F5CEC30C51EA3", 00:22:28.155 "uuid": "52b66bda-64f7-4c32-902f-5cec30c51ea3", 00:22:28.155 "no_auto_visible": false 00:22:28.155 } 00:22:28.155 } 00:22:28.155 }, 00:22:28.155 { 00:22:28.155 "method": "nvmf_subsystem_add_listener", 00:22:28.155 "params": { 00:22:28.155 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.155 "listen_address": { 00:22:28.155 "trtype": "TCP", 00:22:28.155 "adrfam": "IPv4", 00:22:28.155 "traddr": "10.0.0.2", 00:22:28.155 "trsvcid": "4420" 00:22:28.155 }, 00:22:28.155 "secure_channel": false, 00:22:28.155 "sock_impl": "ssl" 00:22:28.155 } 00:22:28.155 } 00:22:28.155 ] 00:22:28.155 } 00:22:28.155 ] 00:22:28.155 }' 00:22:28.155 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.412 00:48:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3092482 00:22:28.412 00:48:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3092482 00:22:28.412 00:48:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:28.412 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3092482 ']' 00:22:28.412 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.412 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.412 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.412 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.412 00:48:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.412 [2024-07-16 00:48:46.050659] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:22:28.412 [2024-07-16 00:48:46.050718] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.412 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.412 [2024-07-16 00:48:46.137825] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.412 [2024-07-16 00:48:46.227020] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.412 [2024-07-16 00:48:46.227063] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.412 [2024-07-16 00:48:46.227073] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.412 [2024-07-16 00:48:46.227082] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.412 [2024-07-16 00:48:46.227089] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
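The target-side configuration echoed above repeats every default the app knows; the TLS-relevant part reduces to the sketch below. This is a distillation for readability, not an extra file used by the test: the bare nvmf_tgt name stands in for the workspace build path (which the test additionally wraps in the cvl_0_0_ns_spdk namespace), and /tmp/tmp.O1tPXhzYW3 is the temporary PSK file created earlier in the run.

# Minimal TLS target config; anything not listed keeps its default value.
nvmf_tgt -i 0 -e 0xFFFF -c <(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.O1tPXhzYW3" } }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 } },
        { "method": "bdev_wait_for_examine" }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport",
          "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
        { "method": "nvmf_subsystem_add_ns",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "namespace": { "nsid": 1, "bdev_name": "malloc0" } } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": false, "sock_impl": "ssl" } }
      ]
    }
  ]
}
JSON
)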
00:22:28.413 [2024-07-16 00:48:46.227147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.671 [2024-07-16 00:48:46.446153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.671 [2024-07-16 00:48:46.478157] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:28.671 [2024-07-16 00:48:46.490561] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.235 00:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.235 00:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:29.235 00:48:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:29.235 00:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:29.235 00:48:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.235 00:48:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.235 00:48:47 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3092758 00:22:29.235 00:48:47 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3092758 /var/tmp/bdevperf.sock 00:22:29.235 00:48:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3092758 ']' 00:22:29.235 00:48:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.235 00:48:47 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:29.235 00:48:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.235 00:48:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
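With the target listening on 10.0.0.2:4420, bdevperf is started idle against its own RPC socket; the long command above boils down to the pattern sketched here. The bare bdevperf name is a stand-in for the workspace build path, and $BDEVPERF_CONFIG stands for the initiator JSON that is echoed next.

# -z keeps bdevperf idle until an RPC arrives; -q/-o/-w/-t describe the
# 128-deep, 4 KiB, verify workload that will run for 1 second once triggered.
bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
         -q 128 -o 4k -w verify -t 1 \
         -c <(echo "$BDEVPERF_CONFIG") &

# The run itself is kicked off later with:
#   bdevperf.py -s /var/tmp/bdevperf.sock perform_tests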
00:22:29.236 00:48:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.236 00:48:47 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:29.236 "subsystems": [ 00:22:29.236 { 00:22:29.236 "subsystem": "keyring", 00:22:29.236 "config": [ 00:22:29.236 { 00:22:29.236 "method": "keyring_file_add_key", 00:22:29.236 "params": { 00:22:29.236 "name": "key0", 00:22:29.236 "path": "/tmp/tmp.O1tPXhzYW3" 00:22:29.236 } 00:22:29.236 } 00:22:29.236 ] 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "subsystem": "iobuf", 00:22:29.236 "config": [ 00:22:29.236 { 00:22:29.236 "method": "iobuf_set_options", 00:22:29.236 "params": { 00:22:29.236 "small_pool_count": 8192, 00:22:29.236 "large_pool_count": 1024, 00:22:29.236 "small_bufsize": 8192, 00:22:29.236 "large_bufsize": 135168 00:22:29.236 } 00:22:29.236 } 00:22:29.236 ] 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "subsystem": "sock", 00:22:29.236 "config": [ 00:22:29.236 { 00:22:29.236 "method": "sock_set_default_impl", 00:22:29.236 "params": { 00:22:29.236 "impl_name": "posix" 00:22:29.236 } 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "method": "sock_impl_set_options", 00:22:29.236 "params": { 00:22:29.236 "impl_name": "ssl", 00:22:29.236 "recv_buf_size": 4096, 00:22:29.236 "send_buf_size": 4096, 00:22:29.236 "enable_recv_pipe": true, 00:22:29.236 "enable_quickack": false, 00:22:29.236 "enable_placement_id": 0, 00:22:29.236 "enable_zerocopy_send_server": true, 00:22:29.236 "enable_zerocopy_send_client": false, 00:22:29.236 "zerocopy_threshold": 0, 00:22:29.236 "tls_version": 0, 00:22:29.236 "enable_ktls": false 00:22:29.236 } 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "method": "sock_impl_set_options", 00:22:29.236 "params": { 00:22:29.236 "impl_name": "posix", 00:22:29.236 "recv_buf_size": 2097152, 00:22:29.236 "send_buf_size": 2097152, 00:22:29.236 "enable_recv_pipe": true, 00:22:29.236 "enable_quickack": false, 00:22:29.236 "enable_placement_id": 0, 00:22:29.236 "enable_zerocopy_send_server": true, 00:22:29.236 "enable_zerocopy_send_client": false, 00:22:29.236 "zerocopy_threshold": 0, 00:22:29.236 "tls_version": 0, 00:22:29.236 "enable_ktls": false 00:22:29.236 } 00:22:29.236 } 00:22:29.236 ] 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "subsystem": "vmd", 00:22:29.236 "config": [] 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "subsystem": "accel", 00:22:29.236 "config": [ 00:22:29.236 { 00:22:29.236 "method": "accel_set_options", 00:22:29.236 "params": { 00:22:29.236 "small_cache_size": 128, 00:22:29.236 "large_cache_size": 16, 00:22:29.236 "task_count": 2048, 00:22:29.236 "sequence_count": 2048, 00:22:29.236 "buf_count": 2048 00:22:29.236 } 00:22:29.236 } 00:22:29.236 ] 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "subsystem": "bdev", 00:22:29.236 "config": [ 00:22:29.236 { 00:22:29.236 "method": "bdev_set_options", 00:22:29.236 "params": { 00:22:29.236 "bdev_io_pool_size": 65535, 00:22:29.236 "bdev_io_cache_size": 256, 00:22:29.236 "bdev_auto_examine": true, 00:22:29.236 "iobuf_small_cache_size": 128, 00:22:29.236 "iobuf_large_cache_size": 16 00:22:29.236 } 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "method": "bdev_raid_set_options", 00:22:29.236 "params": { 00:22:29.236 "process_window_size_kb": 1024 00:22:29.236 } 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "method": "bdev_iscsi_set_options", 00:22:29.236 "params": { 00:22:29.236 "timeout_sec": 30 00:22:29.236 } 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "method": "bdev_nvme_set_options", 00:22:29.236 "params": { 00:22:29.236 "action_on_timeout": "none", 
00:22:29.236 "timeout_us": 0, 00:22:29.236 "timeout_admin_us": 0, 00:22:29.236 "keep_alive_timeout_ms": 10000, 00:22:29.236 "arbitration_burst": 0, 00:22:29.236 "low_priority_weight": 0, 00:22:29.236 "medium_priority_weight": 0, 00:22:29.236 "high_priority_weight": 0, 00:22:29.236 "nvme_adminq_poll_period_us": 10000, 00:22:29.236 "nvme_ioq_poll_period_us": 0, 00:22:29.236 "io_queue_requests": 512, 00:22:29.236 "delay_cmd_submit": true, 00:22:29.236 "transport_retry_count": 4, 00:22:29.236 "bdev_retry_count": 3, 00:22:29.236 "transport_ack_timeout": 0, 00:22:29.236 "ctrlr_loss_timeout_sec": 0, 00:22:29.236 "reconnect_delay_sec": 0, 00:22:29.236 "fast_io_fail_timeout_sec": 0, 00:22:29.236 "disable_auto_failback": false, 00:22:29.236 "generate_uuids": false, 00:22:29.236 "transport_tos": 0, 00:22:29.236 "nvme_error_stat": false, 00:22:29.236 "rdma_srq_size": 0, 00:22:29.236 "io_path_stat": false, 00:22:29.236 "allow_accel_sequence": false, 00:22:29.236 "rdma_max_cq_size": 0, 00:22:29.236 "rdma_cm_event_timeout_ms": 0, 00:22:29.236 "dhchap_digests": [ 00:22:29.236 "sha256", 00:22:29.236 "sha384", 00:22:29.236 "sha512" 00:22:29.236 ], 00:22:29.236 "dhchap_dhgroups": [ 00:22:29.236 "null", 00:22:29.236 "ffdhe2048", 00:22:29.236 "ffdhe3072", 00:22:29.236 "ffdhe4096", 00:22:29.236 "ffdhe6144", 00:22:29.236 "ffdhe8192" 00:22:29.236 ] 00:22:29.236 } 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "method": "bdev_nvme_attach_controller", 00:22:29.236 "params": { 00:22:29.236 "name": "nvme0", 00:22:29.236 "trtype": "TCP", 00:22:29.236 "adrfam": "IPv4", 00:22:29.236 "traddr": "10.0.0.2", 00:22:29.236 "trsvcid": "4420", 00:22:29.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.236 "prchk_reftag": false, 00:22:29.236 "prchk_guard": false, 00:22:29.236 "ctrlr_loss_timeout_sec": 0, 00:22:29.236 "reconnect_delay_sec": 0, 00:22:29.236 "fast_io_fail_timeout_sec": 0, 00:22:29.236 "psk": "key0", 00:22:29.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.236 "hdgst": false, 00:22:29.236 "ddgst": false 00:22:29.236 } 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "method": "bdev_nvme_set_hotplug", 00:22:29.236 "params": { 00:22:29.236 "period_us": 100000, 00:22:29.236 "enable": false 00:22:29.236 } 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "method": "bdev_enable_histogram", 00:22:29.236 "params": { 00:22:29.236 "name": "nvme0n1", 00:22:29.236 "enable": true 00:22:29.236 } 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "method": "bdev_wait_for_examine" 00:22:29.236 } 00:22:29.236 ] 00:22:29.236 }, 00:22:29.236 { 00:22:29.236 "subsystem": "nbd", 00:22:29.236 "config": [] 00:22:29.236 } 00:22:29.236 ] 00:22:29.236 }' 00:22:29.236 00:48:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.236 [2024-07-16 00:48:47.072049] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:22:29.236 [2024-07-16 00:48:47.072109] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092758 ] 00:22:29.493 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.493 [2024-07-16 00:48:47.155698] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.493 [2024-07-16 00:48:47.260278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.765 [2024-07-16 00:48:47.424354] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:30.390 00:48:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:30.390 00:48:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:30.390 00:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:30.390 00:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:30.648 00:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.648 00:48:48 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:30.648 Running I/O for 1 seconds... 00:22:31.584 00:22:31.584 Latency(us) 00:22:31.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.584 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:31.584 Verification LBA range: start 0x0 length 0x2000 00:22:31.584 nvme0n1 : 1.03 3532.22 13.80 0.00 0.00 35791.54 8877.15 66250.94 00:22:31.584 =================================================================================================================== 00:22:31.584 Total : 3532.22 13.80 0.00 0.00 35791.54 8877.15 66250.94 00:22:31.584 0 00:22:31.584 00:48:49 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:31.584 00:48:49 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:31.584 00:48:49 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:31.584 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:22:31.584 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:22:31.584 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:31.584 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:31.584 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:31.584 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:31.843 nvmf_trace.0 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3092758 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3092758 ']' 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 3092758 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3092758 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3092758' 00:22:31.843 killing process with pid 3092758 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3092758 00:22:31.843 Received shutdown signal, test time was about 1.000000 seconds 00:22:31.843 00:22:31.843 Latency(us) 00:22:31.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.843 =================================================================================================================== 00:22:31.843 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.843 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3092758 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:32.103 rmmod nvme_tcp 00:22:32.103 rmmod nvme_fabrics 00:22:32.103 rmmod nvme_keyring 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3092482 ']' 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3092482 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3092482 ']' 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3092482 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3092482 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3092482' 00:22:32.103 killing process with pid 3092482 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3092482 00:22:32.103 00:48:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3092482 00:22:32.362 00:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:32.362 00:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:32.362 00:48:50 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:32.362 00:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.362 00:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.362 00:48:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.362 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.362 00:48:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.896 00:48:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:34.896 00:48:52 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.qMjX2YnxCF /tmp/tmp.agcjQkjSvw /tmp/tmp.O1tPXhzYW3 00:22:34.896 00:22:34.896 real 1m34.599s 00:22:34.896 user 2m34.319s 00:22:34.896 sys 0m27.380s 00:22:34.896 00:48:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:34.896 00:48:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.896 ************************************ 00:22:34.896 END TEST nvmf_tls 00:22:34.896 ************************************ 00:22:34.896 00:48:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:34.896 00:48:52 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:34.896 00:48:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:34.896 00:48:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:34.896 00:48:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.896 ************************************ 00:22:34.896 START TEST nvmf_fips 00:22:34.896 ************************************ 00:22:34.896 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:34.896 * Looking for test storage... 
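Compressed to its essentials, the tail of the TLS test traced above verifies the TLS attach, runs the short workload, and tears everything down; a sketch of those steps is below. rpc.py, bdevperf.py and the pid variables keep the names used in the trace, while $OUTPUT_DIR stands in for the per-build output directory.

# Confirm the controller really attached over the TLS listener, then run I/O.
[[ $(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

# Teardown: archive the trace shared-memory file, stop both apps, unload the
# kernel NVMe/TCP stack, flush the initiator interface and drop the PSK files.
tar -C /dev/shm/ -czf "$OUTPUT_DIR/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
kill "$bdevperf_pid" "$nvmfpid"
modprobe -v -r nvme-tcp nvme-fabrics
ip -4 addr flush cvl_0_1
rm -f /tmp/tmp.qMjX2YnxCF /tmp/tmp.agcjQkjSvw /tmp/tmp.O1tPXhzYW3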
00:22:34.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:34.896 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.896 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:34.896 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.896 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.897 00:48:52 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:34.897 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:22:34.898 Error setting digest 00:22:34.898 0062EB1E177F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:34.898 0062EB1E177F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:34.898 00:48:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:41.467 
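The openssl probing traced above is the gate the FIPS test runs before touching NVMe-oF at all: require OpenSSL 3.x, require both a base and a FIPS provider, and prove that a non-approved digest is actually rejected. Roughly, and simplifying the field-by-field version compare the script performs:

# 1. OpenSSL must be >= 3.0.0 (this host reports 3.0.9).
[[ $(openssl version | awk '{print $2}') == 3.* ]] || exit 1

# 2. Both the base provider and the FIPS provider must be loaded.
providers=$(openssl list -providers | grep -i name)
grep -qi base <<<"$providers" || exit 1
grep -qi fips <<<"$providers" || exit 1

# 3. Negative check: MD5 must fail under the FIPS provider, which is exactly
#    the "Error setting digest" seen above.
if echo test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 unexpectedly available - host is not in FIPS mode" >&2
    exit 1
fi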
00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:41.467 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:41.467 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:41.467 Found net devices under 0000:af:00.0: cvl_0_0 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:41.467 Found net devices under 0000:af:00.1: cvl_0_1 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.467 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:41.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:22:41.468 00:22:41.468 --- 10.0.0.2 ping statistics --- 00:22:41.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.468 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:41.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:22:41.468 00:22:41.468 --- 10.0.0.1 ping statistics --- 00:22:41.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.468 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3096828 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3096828 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3096828 ']' 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.468 00:48:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.468 [2024-07-16 00:48:58.461521] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:22:41.468 [2024-07-16 00:48:58.461586] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.468 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.468 [2024-07-16 00:48:58.550543] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.468 [2024-07-16 00:48:58.654731] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.468 [2024-07-16 00:48:58.654775] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:41.468 [2024-07-16 00:48:58.654788] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.468 [2024-07-16 00:48:58.654799] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.468 [2024-07-16 00:48:58.654809] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.468 [2024-07-16 00:48:58.654842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.727 00:48:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.986 [2024-07-16 00:48:59.640518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.986 [2024-07-16 00:48:59.656494] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:41.986 [2024-07-16 00:48:59.656709] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.986 [2024-07-16 00:48:59.686927] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:41.986 malloc0 00:22:41.986 00:48:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.986 00:48:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3097085 00:22:41.986 00:48:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.986 00:48:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3097085 /var/tmp/bdevperf.sock 00:22:41.986 00:48:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3097085 ']' 00:22:41.986 00:48:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.986 00:48:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:22:41.986 00:48:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.986 00:48:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.986 00:48:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.986 [2024-07-16 00:48:59.791500] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:22:41.986 [2024-07-16 00:48:59.791560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097085 ] 00:22:41.986 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.245 [2024-07-16 00:48:59.908197] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.245 [2024-07-16 00:49:00.069609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.181 00:49:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.181 00:49:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:43.181 00:49:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:43.181 [2024-07-16 00:49:00.841276] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.181 [2024-07-16 00:49:00.841438] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:43.181 TLSTESTn1 00:22:43.181 00:49:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:43.440 Running I/O for 10 seconds... 
00:22:53.412 00:22:53.412 Latency(us) 00:22:53.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.412 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:53.412 Verification LBA range: start 0x0 length 0x2000 00:22:53.412 TLSTESTn1 : 10.02 2812.34 10.99 0.00 0.00 45393.95 9532.51 75783.45 00:22:53.412 =================================================================================================================== 00:22:53.412 Total : 2812.34 10.99 0.00 0.00 45393.95 9532.51 75783.45 00:22:53.412 0 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:53.412 nvmf_trace.0 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3097085 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3097085 ']' 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3097085 00:22:53.412 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:53.670 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:53.670 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3097085 00:22:53.670 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:53.670 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:53.670 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3097085' 00:22:53.670 killing process with pid 3097085 00:22:53.670 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3097085 00:22:53.670 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.670 00:22:53.670 Latency(us) 00:22:53.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.670 =================================================================================================================== 00:22:53.670 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:53.670 [2024-07-16 00:49:11.298365] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:53.670 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3097085 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:53.929 rmmod nvme_tcp 00:22:53.929 rmmod nvme_fabrics 00:22:53.929 rmmod nvme_keyring 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3096828 ']' 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3096828 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3096828 ']' 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3096828 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3096828 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3096828' 00:22:53.929 killing process with pid 3096828 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3096828 00:22:53.929 [2024-07-16 00:49:11.733607] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:53.929 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3096828 00:22:54.188 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:54.188 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:54.188 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:54.188 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:54.188 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:54.188 00:49:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.188 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.188 00:49:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.724 00:49:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:56.724 00:49:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:56.724 00:22:56.724 real 0m21.802s 00:22:56.724 user 0m24.856s 00:22:56.724 sys 0m8.443s 00:22:56.724 00:49:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:56.724 00:49:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:56.724 ************************************ 00:22:56.724 END TEST nvmf_fips 
00:22:56.724 ************************************ 00:22:56.724 00:49:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:56.724 00:49:14 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:22:56.724 00:49:14 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:22:56.724 00:49:14 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:22:56.724 00:49:14 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:22:56.724 00:49:14 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:22:56.724 00:49:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:02.000 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:02.000 00:49:19 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.000 00:49:19 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:02.001 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:02.001 Found net devices under 0000:af:00.0: cvl_0_0 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:02.001 Found net devices under 0000:af:00.1: cvl_0_1 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:23:02.001 00:49:19 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:02.001 00:49:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:02.001 00:49:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:23:02.001 00:49:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:02.001 ************************************ 00:23:02.001 START TEST nvmf_perf_adq 00:23:02.001 ************************************ 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:02.001 * Looking for test storage... 00:23:02.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.001 00:49:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:08.568 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.568 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:08.569 Found 0000:af:00.1 (0x8086 - 0x159b) 
00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:08.569 Found net devices under 0000:af:00.0: cvl_0_0 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:08.569 Found net devices under 0000:af:00.1: cvl_0_1 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:23:08.569 00:49:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:08.829 00:49:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:10.734 00:49:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:16.008 00:49:33 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:16.008 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:16.008 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:16.008 Found net devices under 0000:af:00.0: cvl_0_0 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:16.008 Found net devices under 0000:af:00.1: cvl_0_1 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.008 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.009 00:49:33 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:16.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:23:16.009 00:23:16.009 --- 10.0.0.2 ping statistics --- 00:23:16.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.009 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:16.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:23:16.009 00:23:16.009 --- 10.0.0.1 ping statistics --- 00:23:16.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.009 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3107551 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3107551 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3107551 ']' 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.009 00:49:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.267 [2024-07-16 00:49:33.881467] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:23:16.267 [2024-07-16 00:49:33.881528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.267 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.267 [2024-07-16 00:49:33.968627] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:16.267 [2024-07-16 00:49:34.061634] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.267 [2024-07-16 00:49:34.061676] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.267 [2024-07-16 00:49:34.061686] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.267 [2024-07-16 00:49:34.061695] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.267 [2024-07-16 00:49:34.061703] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.267 [2024-07-16 00:49:34.061746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.267 [2024-07-16 00:49:34.062195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.267 [2024-07-16 00:49:34.062297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.267 [2024-07-16 00:49:34.062299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.202 00:49:34 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:23:17.202 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.202 00:49:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:17.202 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.202 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.202 [2024-07-16 00:49:35.015978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.202 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.202 00:49:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:17.203 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.203 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.462 Malloc1 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.462 [2024-07-16 00:49:35.079677] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3107722 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:23:17.462 00:49:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:17.462 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.367 00:49:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:19.367 00:49:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.367 00:49:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.367 00:49:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.367 00:49:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:23:19.367 
"tick_rate": 2200000000, 00:23:19.367 "poll_groups": [ 00:23:19.367 { 00:23:19.367 "name": "nvmf_tgt_poll_group_000", 00:23:19.367 "admin_qpairs": 1, 00:23:19.367 "io_qpairs": 1, 00:23:19.367 "current_admin_qpairs": 1, 00:23:19.367 "current_io_qpairs": 1, 00:23:19.367 "pending_bdev_io": 0, 00:23:19.367 "completed_nvme_io": 11794, 00:23:19.367 "transports": [ 00:23:19.367 { 00:23:19.367 "trtype": "TCP" 00:23:19.367 } 00:23:19.367 ] 00:23:19.367 }, 00:23:19.367 { 00:23:19.367 "name": "nvmf_tgt_poll_group_001", 00:23:19.367 "admin_qpairs": 0, 00:23:19.367 "io_qpairs": 1, 00:23:19.367 "current_admin_qpairs": 0, 00:23:19.367 "current_io_qpairs": 1, 00:23:19.367 "pending_bdev_io": 0, 00:23:19.367 "completed_nvme_io": 8146, 00:23:19.367 "transports": [ 00:23:19.367 { 00:23:19.367 "trtype": "TCP" 00:23:19.367 } 00:23:19.367 ] 00:23:19.367 }, 00:23:19.367 { 00:23:19.367 "name": "nvmf_tgt_poll_group_002", 00:23:19.367 "admin_qpairs": 0, 00:23:19.367 "io_qpairs": 1, 00:23:19.367 "current_admin_qpairs": 0, 00:23:19.367 "current_io_qpairs": 1, 00:23:19.367 "pending_bdev_io": 0, 00:23:19.367 "completed_nvme_io": 8244, 00:23:19.367 "transports": [ 00:23:19.367 { 00:23:19.367 "trtype": "TCP" 00:23:19.367 } 00:23:19.367 ] 00:23:19.367 }, 00:23:19.367 { 00:23:19.367 "name": "nvmf_tgt_poll_group_003", 00:23:19.367 "admin_qpairs": 0, 00:23:19.367 "io_qpairs": 1, 00:23:19.367 "current_admin_qpairs": 0, 00:23:19.367 "current_io_qpairs": 1, 00:23:19.367 "pending_bdev_io": 0, 00:23:19.367 "completed_nvme_io": 13927, 00:23:19.367 "transports": [ 00:23:19.367 { 00:23:19.367 "trtype": "TCP" 00:23:19.367 } 00:23:19.367 ] 00:23:19.368 } 00:23:19.368 ] 00:23:19.368 }' 00:23:19.368 00:49:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:19.368 00:49:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:23:19.368 00:49:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:23:19.368 00:49:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:23:19.368 00:49:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3107722 00:23:27.567 Initializing NVMe Controllers 00:23:27.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:27.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:27.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:27.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:27.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:27.567 Initialization complete. Launching workers. 
00:23:27.567 ======================================================== 00:23:27.567 Latency(us) 00:23:27.567 Device Information : IOPS MiB/s Average min max 00:23:27.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4369.50 17.07 14658.55 7632.58 21273.56 00:23:27.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4328.40 16.91 14789.83 5452.00 23155.56 00:23:27.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7310.90 28.56 8758.83 4487.54 13343.22 00:23:27.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6261.00 24.46 10229.49 3656.18 15092.96 00:23:27.567 ======================================================== 00:23:27.567 Total : 22269.80 86.99 11502.06 3656.18 23155.56 00:23:27.567 00:23:27.567 00:49:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:27.567 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:27.567 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:27.567 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:27.567 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:27.567 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:27.568 rmmod nvme_tcp 00:23:27.568 rmmod nvme_fabrics 00:23:27.568 rmmod nvme_keyring 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3107551 ']' 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3107551 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3107551 ']' 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3107551 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3107551 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3107551' 00:23:27.568 killing process with pid 3107551 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3107551 00:23:27.568 00:49:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3107551 00:23:27.827 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:27.827 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:27.827 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:27.827 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:27.827 00:49:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.827 00:49:45 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.827 00:49:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.827 00:49:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.363 00:49:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:30.363 00:49:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:30.363 00:49:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:31.298 00:49:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:33.833 00:49:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.105 
00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.105 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:39.106 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:39.106 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:39.106 Found net devices under 0000:af:00.0: cvl_0_0 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:39.106 Found net devices under 0000:af:00.1: cvl_0_1 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.106 
00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:39.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:23:39.106 00:23:39.106 --- 10.0.0.2 ping statistics --- 00:23:39.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.106 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:23:39.106 00:23:39.106 --- 10.0.0.1 ping statistics --- 00:23:39.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.106 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:39.106 net.core.busy_poll = 1 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:39.106 net.core.busy_read = 1 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3111896 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3111896 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3111896 ']' 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:39.106 00:49:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:39.106 [2024-07-16 00:49:56.870547] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:23:39.106 [2024-07-16 00:49:56.870606] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.106 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.364 [2024-07-16 00:49:56.959414] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.364 [2024-07-16 00:49:57.051319] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.364 [2024-07-16 00:49:57.051361] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.364 [2024-07-16 00:49:57.051372] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.364 [2024-07-16 00:49:57.051381] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.364 [2024-07-16 00:49:57.051390] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
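For reference, the adq_configure_driver steps traced above boil down to the following standalone sketch. The interface name (cvl_0_0), listener address (10.0.0.2) and port (4420) are specific to this test bed, and in this run every command executes inside the cvl_0_0_ns_spdk namespace via ip netns exec; adapt those for other setups.

#!/usr/bin/env bash
# ADQ host configuration as exercised by target/perf_adq.sh in this run (sketch, not a drop-in script)
IFACE=cvl_0_0          # E810 port under test (assumption: rename for your NIC)
ADDR=10.0.0.2          # NVMe/TCP listener address used by this test
PORT=4420              # NVMe/TCP listener port

# enable hardware TC offload and turn off packet-inspect optimization on the port
ethtool --offload "$IFACE" hw-tc-offload on
ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

# socket busy polling (only enabled for this second, busy-poll run in the log)
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# two traffic classes: 2 queues at offset 0 for TC0, 2 queues at offset 2 for TC1, offloaded in channel mode
tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev "$IFACE" ingress

# steer NVMe/TCP traffic to TC1 in hardware (skip_sw)
tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
  dst_ip "$ADDR"/32 ip_proto tcp dst_port "$PORT" skip_sw hw_tc 1

# align XPS/RX queue affinity using the helper shipped with SPDK
scripts/perf/nvmf/set_xps_rxqs "$IFACE"

The SPDK-side pairing step is visible a few lines further down in the trace: sock_impl_set_options --enable-placement-id 1 on the posix implementation and nvmf_create_transport ... --sock-priority 1, so that target qpairs land on the ADQ traffic class.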
00:23:39.364 [2024-07-16 00:49:57.051442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.364 [2024-07-16 00:49:57.051555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.364 [2024-07-16 00:49:57.051665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.364 [2024-07-16 00:49:57.051665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.930 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.930 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:39.930 00:49:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.930 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:39.930 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.187 00:49:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.187 00:49:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:40.187 00:49:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:40.187 00:49:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.188 [2024-07-16 00:49:57.928792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.188 Malloc1 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.188 00:49:57 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.188 [2024-07-16 00:49:57.980675] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3112165 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:40.188 00:49:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:40.188 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.718 00:49:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:42.718 00:49:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.718 00:49:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:42.718 00:50:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.718 00:50:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:42.718 "tick_rate": 2200000000, 00:23:42.718 "poll_groups": [ 00:23:42.718 { 00:23:42.718 "name": "nvmf_tgt_poll_group_000", 00:23:42.718 "admin_qpairs": 1, 00:23:42.718 "io_qpairs": 2, 00:23:42.718 "current_admin_qpairs": 1, 00:23:42.718 "current_io_qpairs": 2, 00:23:42.718 "pending_bdev_io": 0, 00:23:42.718 "completed_nvme_io": 16672, 00:23:42.718 "transports": [ 00:23:42.718 { 00:23:42.718 "trtype": "TCP" 00:23:42.718 } 00:23:42.718 ] 00:23:42.718 }, 00:23:42.718 { 00:23:42.718 "name": "nvmf_tgt_poll_group_001", 00:23:42.718 "admin_qpairs": 0, 00:23:42.718 "io_qpairs": 2, 00:23:42.718 "current_admin_qpairs": 0, 00:23:42.718 "current_io_qpairs": 2, 00:23:42.718 "pending_bdev_io": 0, 00:23:42.718 "completed_nvme_io": 10560, 00:23:42.718 "transports": [ 00:23:42.718 { 00:23:42.718 "trtype": "TCP" 00:23:42.718 } 00:23:42.718 ] 00:23:42.718 }, 00:23:42.718 { 00:23:42.718 "name": "nvmf_tgt_poll_group_002", 00:23:42.718 "admin_qpairs": 0, 00:23:42.718 "io_qpairs": 0, 00:23:42.718 "current_admin_qpairs": 0, 00:23:42.718 "current_io_qpairs": 0, 00:23:42.718 "pending_bdev_io": 0, 00:23:42.718 "completed_nvme_io": 0, 
00:23:42.718 "transports": [ 00:23:42.718 { 00:23:42.718 "trtype": "TCP" 00:23:42.718 } 00:23:42.718 ] 00:23:42.718 }, 00:23:42.718 { 00:23:42.718 "name": "nvmf_tgt_poll_group_003", 00:23:42.718 "admin_qpairs": 0, 00:23:42.718 "io_qpairs": 0, 00:23:42.718 "current_admin_qpairs": 0, 00:23:42.718 "current_io_qpairs": 0, 00:23:42.718 "pending_bdev_io": 0, 00:23:42.718 "completed_nvme_io": 0, 00:23:42.718 "transports": [ 00:23:42.718 { 00:23:42.718 "trtype": "TCP" 00:23:42.718 } 00:23:42.718 ] 00:23:42.718 } 00:23:42.718 ] 00:23:42.718 }' 00:23:42.718 00:50:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:42.718 00:50:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:42.718 00:50:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:42.718 00:50:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:42.718 00:50:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3112165 00:23:50.836 Initializing NVMe Controllers 00:23:50.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:50.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:50.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:50.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:50.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:50.836 Initialization complete. Launching workers. 00:23:50.836 ======================================================== 00:23:50.836 Latency(us) 00:23:50.836 Device Information : IOPS MiB/s Average min max 00:23:50.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 2706.60 10.57 23664.42 5612.88 74017.17 00:23:50.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 2929.10 11.44 21860.73 3424.54 71088.75 00:23:50.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4557.70 17.80 14057.11 2286.67 61848.68 00:23:50.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4107.00 16.04 15596.74 2444.41 60686.20 00:23:50.836 ======================================================== 00:23:50.836 Total : 14300.40 55.86 17916.02 2286.67 74017.17 00:23:50.836 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:50.836 rmmod nvme_tcp 00:23:50.836 rmmod nvme_fabrics 00:23:50.836 rmmod nvme_keyring 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3111896 ']' 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3111896 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3111896 ']' 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3111896 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3111896 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3111896' 00:23:50.836 killing process with pid 3111896 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3111896 00:23:50.836 00:50:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3111896 00:23:50.837 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:50.837 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:50.837 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:50.837 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.837 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.837 00:50:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.837 00:50:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.837 00:50:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.125 00:50:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:54.125 00:50:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:54.125 00:23:54.125 real 0m52.032s 00:23:54.125 user 2m50.945s 00:23:54.125 sys 0m9.635s 00:23:54.125 00:50:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:54.125 00:50:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:54.125 ************************************ 00:23:54.125 END TEST nvmf_perf_adq 00:23:54.125 ************************************ 00:23:54.125 00:50:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:54.125 00:50:11 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:54.125 00:50:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:54.125 00:50:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:54.125 00:50:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:54.125 ************************************ 00:23:54.125 START TEST nvmf_shutdown 00:23:54.125 ************************************ 00:23:54.125 00:50:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:54.125 * Looking for test storage... 
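The pass/fail decision in both perf runs above is just a count of poll groups in the nvmf_get_stats output. rpc_cmd in the trace wraps scripts/rpc.py (default socket /var/tmp/spdk.sock), so the same checks can be reproduced standalone roughly as below; the expected counts are the ones this run uses.

# Sketch of the poll-group checks from target/perf_adq.sh@78 and @100 in the trace above
RPC="./scripts/rpc.py"

# run without busy_poll: the test fails unless all 4 poll groups carry exactly one I/O qpair
busy=$($RPC nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l)
[[ $busy -ne 4 ]] && { echo "qpairs not spread across all poll groups ($busy/4)"; exit 1; }

# busy_poll run with --sock-priority 1: in this log two poll groups end up idle,
# and the test fails if fewer than 2 groups have no I/O qpairs
idle=$($RPC nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
[[ $idle -lt 2 ]] && { echo "expected >= 2 idle poll groups, got $idle"; exit 1; }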
00:23:54.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:54.125 00:50:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.125 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:54.125 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:54.126 ************************************ 00:23:54.126 START TEST nvmf_shutdown_tc1 00:23:54.126 ************************************ 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:23:54.126 00:50:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:54.126 00:50:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.716 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:00.717 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:00.717 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.717 00:50:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:00.717 Found net devices under 0000:af:00.0: cvl_0_0 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:00.717 Found net devices under 0000:af:00.1: cvl_0_1 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:24:00.717 00:24:00.717 --- 10.0.0.2 ping statistics --- 00:24:00.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.717 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:24:00.717 00:24:00.717 --- 10.0.0.1 ping statistics --- 00:24:00.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.717 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.717 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.718 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:00.718 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3117811 00:24:00.718 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3117811 00:24:00.718 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:00.718 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3117811 ']' 00:24:00.718 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.718 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.718 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.718 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.718 00:50:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:00.718 [2024-07-16 00:50:17.830185] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:24:00.718 [2024-07-16 00:50:17.830247] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.718 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.718 [2024-07-16 00:50:17.919715] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.718 [2024-07-16 00:50:18.025653] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.718 [2024-07-16 00:50:18.025699] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.718 [2024-07-16 00:50:18.025712] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.718 [2024-07-16 00:50:18.025723] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.718 [2024-07-16 00:50:18.025732] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.718 [2024-07-16 00:50:18.025853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.718 [2024-07-16 00:50:18.025886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.718 [2024-07-16 00:50:18.026006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:00.718 [2024-07-16 00:50:18.026008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.976 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.976 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:24:00.976 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.976 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.976 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:01.233 [2024-07-16 00:50:18.822888] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:01.233 00:50:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.233 00:50:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:01.233 Malloc1 00:24:01.233 [2024-07-16 00:50:18.929485] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.233 Malloc2 00:24:01.233 Malloc3 00:24:01.233 Malloc4 00:24:01.491 Malloc5 00:24:01.491 Malloc6 00:24:01.491 Malloc7 00:24:01.491 Malloc8 00:24:01.491 Malloc9 00:24:01.491 Malloc10 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3118152 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3118152 
/var/tmp/bdevperf.sock 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3118152 ']' 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.749 { 00:24:01.749 "params": { 00:24:01.749 "name": "Nvme$subsystem", 00:24:01.749 "trtype": "$TEST_TRANSPORT", 00:24:01.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.749 "adrfam": "ipv4", 00:24:01.749 "trsvcid": "$NVMF_PORT", 00:24:01.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.749 "hdgst": ${hdgst:-false}, 00:24:01.749 "ddgst": ${ddgst:-false} 00:24:01.749 }, 00:24:01.749 "method": "bdev_nvme_attach_controller" 00:24:01.749 } 00:24:01.749 EOF 00:24:01.749 )") 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.749 { 00:24:01.749 "params": { 00:24:01.749 "name": "Nvme$subsystem", 00:24:01.749 "trtype": "$TEST_TRANSPORT", 00:24:01.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.749 "adrfam": "ipv4", 00:24:01.749 "trsvcid": "$NVMF_PORT", 00:24:01.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.749 "hdgst": ${hdgst:-false}, 00:24:01.749 "ddgst": ${ddgst:-false} 00:24:01.749 }, 00:24:01.749 "method": "bdev_nvme_attach_controller" 00:24:01.749 } 00:24:01.749 EOF 00:24:01.749 )") 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.749 { 00:24:01.749 "params": { 00:24:01.749 
"name": "Nvme$subsystem", 00:24:01.749 "trtype": "$TEST_TRANSPORT", 00:24:01.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.749 "adrfam": "ipv4", 00:24:01.749 "trsvcid": "$NVMF_PORT", 00:24:01.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.749 "hdgst": ${hdgst:-false}, 00:24:01.749 "ddgst": ${ddgst:-false} 00:24:01.749 }, 00:24:01.749 "method": "bdev_nvme_attach_controller" 00:24:01.749 } 00:24:01.749 EOF 00:24:01.749 )") 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.749 { 00:24:01.749 "params": { 00:24:01.749 "name": "Nvme$subsystem", 00:24:01.749 "trtype": "$TEST_TRANSPORT", 00:24:01.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.749 "adrfam": "ipv4", 00:24:01.749 "trsvcid": "$NVMF_PORT", 00:24:01.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.749 "hdgst": ${hdgst:-false}, 00:24:01.749 "ddgst": ${ddgst:-false} 00:24:01.749 }, 00:24:01.749 "method": "bdev_nvme_attach_controller" 00:24:01.749 } 00:24:01.749 EOF 00:24:01.749 )") 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.749 { 00:24:01.749 "params": { 00:24:01.749 "name": "Nvme$subsystem", 00:24:01.749 "trtype": "$TEST_TRANSPORT", 00:24:01.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.749 "adrfam": "ipv4", 00:24:01.749 "trsvcid": "$NVMF_PORT", 00:24:01.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.749 "hdgst": ${hdgst:-false}, 00:24:01.749 "ddgst": ${ddgst:-false} 00:24:01.749 }, 00:24:01.749 "method": "bdev_nvme_attach_controller" 00:24:01.749 } 00:24:01.749 EOF 00:24:01.749 )") 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.749 { 00:24:01.749 "params": { 00:24:01.749 "name": "Nvme$subsystem", 00:24:01.749 "trtype": "$TEST_TRANSPORT", 00:24:01.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.749 "adrfam": "ipv4", 00:24:01.749 "trsvcid": "$NVMF_PORT", 00:24:01.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.749 "hdgst": ${hdgst:-false}, 00:24:01.749 "ddgst": ${ddgst:-false} 00:24:01.749 }, 00:24:01.749 "method": "bdev_nvme_attach_controller" 00:24:01.749 } 00:24:01.749 EOF 00:24:01.749 )") 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.749 { 00:24:01.749 "params": { 00:24:01.749 "name": "Nvme$subsystem", 
00:24:01.749 "trtype": "$TEST_TRANSPORT", 00:24:01.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.749 "adrfam": "ipv4", 00:24:01.749 "trsvcid": "$NVMF_PORT", 00:24:01.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.749 "hdgst": ${hdgst:-false}, 00:24:01.749 "ddgst": ${ddgst:-false} 00:24:01.749 }, 00:24:01.749 "method": "bdev_nvme_attach_controller" 00:24:01.749 } 00:24:01.749 EOF 00:24:01.749 )") 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:01.749 [2024-07-16 00:50:19.423242] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:24:01.749 [2024-07-16 00:50:19.423311] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.749 { 00:24:01.749 "params": { 00:24:01.749 "name": "Nvme$subsystem", 00:24:01.749 "trtype": "$TEST_TRANSPORT", 00:24:01.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.749 "adrfam": "ipv4", 00:24:01.749 "trsvcid": "$NVMF_PORT", 00:24:01.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.749 "hdgst": ${hdgst:-false}, 00:24:01.749 "ddgst": ${ddgst:-false} 00:24:01.749 }, 00:24:01.749 "method": "bdev_nvme_attach_controller" 00:24:01.749 } 00:24:01.749 EOF 00:24:01.749 )") 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.749 { 00:24:01.749 "params": { 00:24:01.749 "name": "Nvme$subsystem", 00:24:01.749 "trtype": "$TEST_TRANSPORT", 00:24:01.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.749 "adrfam": "ipv4", 00:24:01.749 "trsvcid": "$NVMF_PORT", 00:24:01.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.749 "hdgst": ${hdgst:-false}, 00:24:01.749 "ddgst": ${ddgst:-false} 00:24:01.749 }, 00:24:01.749 "method": "bdev_nvme_attach_controller" 00:24:01.749 } 00:24:01.749 EOF 00:24:01.749 )") 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.749 { 00:24:01.749 "params": { 00:24:01.749 "name": "Nvme$subsystem", 00:24:01.749 "trtype": "$TEST_TRANSPORT", 00:24:01.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.749 "adrfam": "ipv4", 00:24:01.749 "trsvcid": "$NVMF_PORT", 00:24:01.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.749 "hdgst": ${hdgst:-false}, 00:24:01.749 "ddgst": ${ddgst:-false} 00:24:01.749 }, 00:24:01.749 "method": "bdev_nvme_attach_controller" 00:24:01.749 } 00:24:01.749 EOF 00:24:01.749 )") 00:24:01.749 00:50:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:01.749 00:50:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:01.749 "params": { 00:24:01.749 "name": "Nvme1", 00:24:01.749 "trtype": "tcp", 00:24:01.749 "traddr": "10.0.0.2", 00:24:01.749 "adrfam": "ipv4", 00:24:01.749 "trsvcid": "4420", 00:24:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.750 "hdgst": false, 00:24:01.750 "ddgst": false 00:24:01.750 }, 00:24:01.750 "method": "bdev_nvme_attach_controller" 00:24:01.750 },{ 00:24:01.750 "params": { 00:24:01.750 "name": "Nvme2", 00:24:01.750 "trtype": "tcp", 00:24:01.750 "traddr": "10.0.0.2", 00:24:01.750 "adrfam": "ipv4", 00:24:01.750 "trsvcid": "4420", 00:24:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:01.750 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:01.750 "hdgst": false, 00:24:01.750 "ddgst": false 00:24:01.750 }, 00:24:01.750 "method": "bdev_nvme_attach_controller" 00:24:01.750 },{ 00:24:01.750 "params": { 00:24:01.750 "name": "Nvme3", 00:24:01.750 "trtype": "tcp", 00:24:01.750 "traddr": "10.0.0.2", 00:24:01.750 "adrfam": "ipv4", 00:24:01.750 "trsvcid": "4420", 00:24:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:01.750 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:01.750 "hdgst": false, 00:24:01.750 "ddgst": false 00:24:01.750 }, 00:24:01.750 "method": "bdev_nvme_attach_controller" 00:24:01.750 },{ 00:24:01.750 "params": { 00:24:01.750 "name": "Nvme4", 00:24:01.750 "trtype": "tcp", 00:24:01.750 "traddr": "10.0.0.2", 00:24:01.750 "adrfam": "ipv4", 00:24:01.750 "trsvcid": "4420", 00:24:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:01.750 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:01.750 "hdgst": false, 00:24:01.750 "ddgst": false 00:24:01.750 }, 00:24:01.750 "method": "bdev_nvme_attach_controller" 00:24:01.750 },{ 00:24:01.750 "params": { 00:24:01.750 "name": "Nvme5", 00:24:01.750 "trtype": "tcp", 00:24:01.750 "traddr": "10.0.0.2", 00:24:01.750 "adrfam": "ipv4", 00:24:01.750 "trsvcid": "4420", 00:24:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:01.750 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:01.750 "hdgst": false, 00:24:01.750 "ddgst": false 00:24:01.750 }, 00:24:01.750 "method": "bdev_nvme_attach_controller" 00:24:01.750 },{ 00:24:01.750 "params": { 00:24:01.750 "name": "Nvme6", 00:24:01.750 "trtype": "tcp", 00:24:01.750 "traddr": "10.0.0.2", 00:24:01.750 "adrfam": "ipv4", 00:24:01.750 "trsvcid": "4420", 00:24:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:01.750 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:01.750 "hdgst": false, 00:24:01.750 "ddgst": false 00:24:01.750 }, 00:24:01.750 "method": "bdev_nvme_attach_controller" 00:24:01.750 },{ 00:24:01.750 "params": { 00:24:01.750 "name": "Nvme7", 00:24:01.750 "trtype": "tcp", 00:24:01.750 "traddr": "10.0.0.2", 00:24:01.750 "adrfam": "ipv4", 00:24:01.750 "trsvcid": "4420", 00:24:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:01.750 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:01.750 "hdgst": false, 00:24:01.750 "ddgst": false 00:24:01.750 }, 00:24:01.750 "method": "bdev_nvme_attach_controller" 00:24:01.750 },{ 00:24:01.750 "params": { 00:24:01.750 "name": "Nvme8", 00:24:01.750 "trtype": "tcp", 00:24:01.750 "traddr": "10.0.0.2", 00:24:01.750 "adrfam": "ipv4", 
00:24:01.750 "trsvcid": "4420", 00:24:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:01.750 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:01.750 "hdgst": false, 00:24:01.750 "ddgst": false 00:24:01.750 }, 00:24:01.750 "method": "bdev_nvme_attach_controller" 00:24:01.750 },{ 00:24:01.750 "params": { 00:24:01.750 "name": "Nvme9", 00:24:01.750 "trtype": "tcp", 00:24:01.750 "traddr": "10.0.0.2", 00:24:01.750 "adrfam": "ipv4", 00:24:01.750 "trsvcid": "4420", 00:24:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:01.750 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:01.750 "hdgst": false, 00:24:01.750 "ddgst": false 00:24:01.750 }, 00:24:01.750 "method": "bdev_nvme_attach_controller" 00:24:01.750 },{ 00:24:01.750 "params": { 00:24:01.750 "name": "Nvme10", 00:24:01.750 "trtype": "tcp", 00:24:01.750 "traddr": "10.0.0.2", 00:24:01.750 "adrfam": "ipv4", 00:24:01.750 "trsvcid": "4420", 00:24:01.750 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:01.750 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:01.750 "hdgst": false, 00:24:01.750 "ddgst": false 00:24:01.750 }, 00:24:01.750 "method": "bdev_nvme_attach_controller" 00:24:01.750 }' 00:24:01.750 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.750 [2024-07-16 00:50:19.506285] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.008 [2024-07-16 00:50:19.592103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.910 00:50:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.910 00:50:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:24:03.910 00:50:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:03.910 00:50:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.910 00:50:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:03.910 00:50:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.910 00:50:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3118152 00:24:03.910 00:50:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:03.910 00:50:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:24:04.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3118152 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:04.848 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3117811 00:24:04.848 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:04.848 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:04.848 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:04.848 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:04.848 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.848 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.848 { 00:24:04.848 "params": { 00:24:04.848 "name": "Nvme$subsystem", 00:24:04.848 "trtype": "$TEST_TRANSPORT", 00:24:04.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.848 "adrfam": "ipv4", 00:24:04.848 "trsvcid": "$NVMF_PORT", 00:24:04.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.848 "hdgst": ${hdgst:-false}, 00:24:04.848 "ddgst": ${ddgst:-false} 00:24:04.848 }, 00:24:04.848 "method": "bdev_nvme_attach_controller" 00:24:04.848 } 00:24:04.848 EOF 00:24:04.848 )") 00:24:04.848 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:04.848 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.848 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.848 { 00:24:04.848 "params": { 00:24:04.848 "name": "Nvme$subsystem", 00:24:04.848 "trtype": "$TEST_TRANSPORT", 00:24:04.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.848 "adrfam": "ipv4", 00:24:04.848 "trsvcid": "$NVMF_PORT", 00:24:04.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.848 "hdgst": ${hdgst:-false}, 00:24:04.848 "ddgst": ${ddgst:-false} 00:24:04.848 }, 00:24:04.848 "method": "bdev_nvme_attach_controller" 00:24:04.848 } 00:24:04.848 EOF 00:24:04.848 )") 00:24:04.848 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:04.848 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.848 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.848 { 00:24:04.848 "params": { 00:24:04.848 "name": "Nvme$subsystem", 00:24:04.848 "trtype": "$TEST_TRANSPORT", 00:24:04.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.848 "adrfam": "ipv4", 00:24:04.848 "trsvcid": "$NVMF_PORT", 00:24:04.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.848 "hdgst": ${hdgst:-false}, 00:24:04.848 "ddgst": ${ddgst:-false} 00:24:04.848 }, 00:24:04.848 "method": "bdev_nvme_attach_controller" 00:24:04.849 } 00:24:04.849 EOF 00:24:04.849 )") 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.849 { 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme$subsystem", 00:24:04.849 "trtype": "$TEST_TRANSPORT", 00:24:04.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "$NVMF_PORT", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.849 "hdgst": ${hdgst:-false}, 00:24:04.849 "ddgst": ${ddgst:-false} 00:24:04.849 }, 00:24:04.849 "method": "bdev_nvme_attach_controller" 00:24:04.849 } 00:24:04.849 EOF 00:24:04.849 )") 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:24:04.849 { 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme$subsystem", 00:24:04.849 "trtype": "$TEST_TRANSPORT", 00:24:04.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "$NVMF_PORT", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.849 "hdgst": ${hdgst:-false}, 00:24:04.849 "ddgst": ${ddgst:-false} 00:24:04.849 }, 00:24:04.849 "method": "bdev_nvme_attach_controller" 00:24:04.849 } 00:24:04.849 EOF 00:24:04.849 )") 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.849 { 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme$subsystem", 00:24:04.849 "trtype": "$TEST_TRANSPORT", 00:24:04.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "$NVMF_PORT", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.849 "hdgst": ${hdgst:-false}, 00:24:04.849 "ddgst": ${ddgst:-false} 00:24:04.849 }, 00:24:04.849 "method": "bdev_nvme_attach_controller" 00:24:04.849 } 00:24:04.849 EOF 00:24:04.849 )") 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.849 { 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme$subsystem", 00:24:04.849 "trtype": "$TEST_TRANSPORT", 00:24:04.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "$NVMF_PORT", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.849 "hdgst": ${hdgst:-false}, 00:24:04.849 "ddgst": ${ddgst:-false} 00:24:04.849 }, 00:24:04.849 "method": "bdev_nvme_attach_controller" 00:24:04.849 } 00:24:04.849 EOF 00:24:04.849 )") 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:04.849 [2024-07-16 00:50:22.454793] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:24:04.849 [2024-07-16 00:50:22.454857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3118703 ] 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.849 { 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme$subsystem", 00:24:04.849 "trtype": "$TEST_TRANSPORT", 00:24:04.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "$NVMF_PORT", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.849 "hdgst": ${hdgst:-false}, 00:24:04.849 "ddgst": ${ddgst:-false} 00:24:04.849 }, 00:24:04.849 "method": "bdev_nvme_attach_controller" 00:24:04.849 } 00:24:04.849 EOF 00:24:04.849 )") 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.849 { 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme$subsystem", 00:24:04.849 "trtype": "$TEST_TRANSPORT", 00:24:04.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "$NVMF_PORT", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.849 "hdgst": ${hdgst:-false}, 00:24:04.849 "ddgst": ${ddgst:-false} 00:24:04.849 }, 00:24:04.849 "method": "bdev_nvme_attach_controller" 00:24:04.849 } 00:24:04.849 EOF 00:24:04.849 )") 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:04.849 { 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme$subsystem", 00:24:04.849 "trtype": "$TEST_TRANSPORT", 00:24:04.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "$NVMF_PORT", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.849 "hdgst": ${hdgst:-false}, 00:24:04.849 "ddgst": ${ddgst:-false} 00:24:04.849 }, 00:24:04.849 "method": "bdev_nvme_attach_controller" 00:24:04.849 } 00:24:04.849 EOF 00:24:04.849 )") 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
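The config+=("$(cat <<-EOF ... EOF)") fragments above are gen_nvmf_target_json assembling one bdev_nvme_attach_controller stanza per subsystem; the stanzas are then comma-joined (the IFS=, and printf entries just below) and handed to bdevperf as a JSON config over a process substitution. A condensed reconstruction of that helper is sketched here; the outer "subsystems"/"bdev" wrapper is an approximation, while the per-controller block mirrors the heredoc visible in the trace:

# Condensed reconstruction of gen_nvmf_target_json as traced in this run.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the stanzas into a bdev-subsystem config and pretty-print it.
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
                    "config": [ $(IFS=","; printf '%s\n' "${config[*]}") ] } ] }
JSON
}

# Consumed as in the trace, e.g.:
#   bdevperf --json <(gen_nvmf_target_json {1..10}) -q 64 -o 65536 -w verify -t 1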
00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:04.849 00:50:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme1", 00:24:04.849 "trtype": "tcp", 00:24:04.849 "traddr": "10.0.0.2", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "4420", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:04.849 "hdgst": false, 00:24:04.849 "ddgst": false 00:24:04.849 }, 00:24:04.849 "method": "bdev_nvme_attach_controller" 00:24:04.849 },{ 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme2", 00:24:04.849 "trtype": "tcp", 00:24:04.849 "traddr": "10.0.0.2", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "4420", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:04.849 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:04.849 "hdgst": false, 00:24:04.849 "ddgst": false 00:24:04.849 }, 00:24:04.849 "method": "bdev_nvme_attach_controller" 00:24:04.849 },{ 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme3", 00:24:04.849 "trtype": "tcp", 00:24:04.849 "traddr": "10.0.0.2", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "4420", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:04.849 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:04.849 "hdgst": false, 00:24:04.849 "ddgst": false 00:24:04.849 }, 00:24:04.849 "method": "bdev_nvme_attach_controller" 00:24:04.849 },{ 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme4", 00:24:04.849 "trtype": "tcp", 00:24:04.849 "traddr": "10.0.0.2", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "4420", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:04.849 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:04.849 "hdgst": false, 00:24:04.849 "ddgst": false 00:24:04.849 }, 00:24:04.849 "method": "bdev_nvme_attach_controller" 00:24:04.849 },{ 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme5", 00:24:04.849 "trtype": "tcp", 00:24:04.849 "traddr": "10.0.0.2", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "4420", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:04.849 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:04.849 "hdgst": false, 00:24:04.849 "ddgst": false 00:24:04.849 }, 00:24:04.849 "method": "bdev_nvme_attach_controller" 00:24:04.849 },{ 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme6", 00:24:04.849 "trtype": "tcp", 00:24:04.849 "traddr": "10.0.0.2", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "4420", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:04.849 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:04.849 "hdgst": false, 00:24:04.849 "ddgst": false 00:24:04.849 }, 00:24:04.849 "method": "bdev_nvme_attach_controller" 00:24:04.849 },{ 00:24:04.849 "params": { 00:24:04.849 "name": "Nvme7", 00:24:04.849 "trtype": "tcp", 00:24:04.849 "traddr": "10.0.0.2", 00:24:04.849 "adrfam": "ipv4", 00:24:04.849 "trsvcid": "4420", 00:24:04.849 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:04.850 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:04.850 "hdgst": false, 00:24:04.850 "ddgst": false 00:24:04.850 }, 00:24:04.850 "method": "bdev_nvme_attach_controller" 00:24:04.850 },{ 00:24:04.850 "params": { 00:24:04.850 "name": "Nvme8", 00:24:04.850 "trtype": "tcp", 00:24:04.850 "traddr": "10.0.0.2", 00:24:04.850 "adrfam": "ipv4", 00:24:04.850 "trsvcid": "4420", 00:24:04.850 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:04.850 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:04.850 "hdgst": false, 
00:24:04.850 "ddgst": false 00:24:04.850 }, 00:24:04.850 "method": "bdev_nvme_attach_controller" 00:24:04.850 },{ 00:24:04.850 "params": { 00:24:04.850 "name": "Nvme9", 00:24:04.850 "trtype": "tcp", 00:24:04.850 "traddr": "10.0.0.2", 00:24:04.850 "adrfam": "ipv4", 00:24:04.850 "trsvcid": "4420", 00:24:04.850 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:04.850 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:04.850 "hdgst": false, 00:24:04.850 "ddgst": false 00:24:04.850 }, 00:24:04.850 "method": "bdev_nvme_attach_controller" 00:24:04.850 },{ 00:24:04.850 "params": { 00:24:04.850 "name": "Nvme10", 00:24:04.850 "trtype": "tcp", 00:24:04.850 "traddr": "10.0.0.2", 00:24:04.850 "adrfam": "ipv4", 00:24:04.850 "trsvcid": "4420", 00:24:04.850 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:04.850 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:04.850 "hdgst": false, 00:24:04.850 "ddgst": false 00:24:04.850 }, 00:24:04.850 "method": "bdev_nvme_attach_controller" 00:24:04.850 }' 00:24:04.850 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.850 [2024-07-16 00:50:22.541314] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.850 [2024-07-16 00:50:22.628649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.226 Running I/O for 1 seconds... 00:24:07.603 00:24:07.603 Latency(us) 00:24:07.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.603 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:07.603 Verification LBA range: start 0x0 length 0x400 00:24:07.603 Nvme1n1 : 1.14 169.14 10.57 0.00 0.00 373621.60 57909.99 320292.31 00:24:07.603 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:07.603 Verification LBA range: start 0x0 length 0x400 00:24:07.603 Nvme2n1 : 1.23 156.19 9.76 0.00 0.00 396105.54 56241.80 394645.88 00:24:07.603 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:07.603 Verification LBA range: start 0x0 length 0x400 00:24:07.603 Nvme3n1 : 1.05 182.53 11.41 0.00 0.00 330290.27 33363.78 335544.32 00:24:07.603 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:07.603 Verification LBA range: start 0x0 length 0x400 00:24:07.603 Nvme4n1 : 1.22 261.92 16.37 0.00 0.00 226773.55 16681.89 276442.76 00:24:07.603 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:07.603 Verification LBA range: start 0x0 length 0x400 00:24:07.603 Nvme5n1 : 1.18 162.39 10.15 0.00 0.00 356765.94 45756.04 274536.26 00:24:07.603 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:07.603 Verification LBA range: start 0x0 length 0x400 00:24:07.603 Nvme6n1 : 1.21 158.14 9.88 0.00 0.00 360278.57 49330.73 362235.35 00:24:07.603 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:07.603 Verification LBA range: start 0x0 length 0x400 00:24:07.603 Nvme7n1 : 1.23 208.82 13.05 0.00 0.00 267596.80 32648.84 289788.28 00:24:07.603 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:07.603 Verification LBA range: start 0x0 length 0x400 00:24:07.603 Nvme8n1 : 1.20 217.39 13.59 0.00 0.00 249435.19 3813.00 305040.29 00:24:07.603 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:07.603 Verification LBA range: start 0x0 length 0x400 00:24:07.603 Nvme9n1 : 1.24 210.50 13.16 0.00 0.00 253954.29 16801.05 301227.29 00:24:07.603 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:07.603 Verification LBA range: start 0x0 length 0x400 00:24:07.603 Nvme10n1 : 1.24 217.80 13.61 0.00 0.00 239524.35 4349.21 322198.81 00:24:07.603 =================================================================================================================== 00:24:07.603 Total : 1944.82 121.55 0.00 0.00 294669.25 3813.00 394645.88 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:07.863 rmmod nvme_tcp 00:24:07.863 rmmod nvme_fabrics 00:24:07.863 rmmod nvme_keyring 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3117811 ']' 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3117811 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3117811 ']' 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3117811 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3117811 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3117811' 00:24:07.863 killing process with pid 3117811 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3117811 00:24:07.863 00:50:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3117811 
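With the verify run finished, stoptarget/nvmftestfini tear the test case down: the bdevperf state and generated config files are removed, the nvme-tcp/nvme-fabrics modules are unloaded, the namespaced nvmf_tgt (pid 3117811 here) is killed, and just below the namespace itself is removed and the initiator address flushed. A hedged sketch of that cleanup, with $rootdir standing for the SPDK checkout and $nvmfpid for the target pid:

# Teardown as traced for tc1 (names/paths from this run; _remove_spdk_ns is
# assumed to boil down to deleting the cvl_0_0_ns_spdk namespace).
rm -f ./local-job0-0-verify.state
rm -rf "$rootdir/test/nvmf/target/bdevperf.conf" "$rootdir/test/nvmf/target/rpcs.txt"
sync
modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the namespaced nvmf_tgt
ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1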
00:24:08.432 00:50:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:08.432 00:50:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:08.432 00:50:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:08.432 00:50:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.432 00:50:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:08.432 00:50:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.432 00:50:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.432 00:50:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:10.968 00:24:10.968 real 0m16.426s 00:24:10.968 user 0m38.756s 00:24:10.968 sys 0m5.961s 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:10.968 ************************************ 00:24:10.968 END TEST nvmf_shutdown_tc1 00:24:10.968 ************************************ 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:10.968 ************************************ 00:24:10.968 START TEST nvmf_shutdown_tc2 00:24:10.968 ************************************ 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:10.968 
00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.968 00:50:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:10.968 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:10.968 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:24:10.968 Found net devices under 0000:af:00.0: cvl_0_0 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:10.968 Found net devices under 0000:af:00.1: cvl_0_1 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:10.968 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:24:10.969 00:24:10.969 --- 10.0.0.2 ping statistics --- 00:24:10.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.969 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:24:10.969 00:24:10.969 --- 10.0.0.1 ping statistics --- 00:24:10.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.969 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3119856 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3119856 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3119856 ']' 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.969 00:50:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:10.969 [2024-07-16 00:50:28.749631] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:24:10.969 [2024-07-16 00:50:28.749687] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.969 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.228 [2024-07-16 00:50:28.837404] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:11.228 [2024-07-16 00:50:28.943873] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.228 [2024-07-16 00:50:28.943919] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.228 [2024-07-16 00:50:28.943932] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.228 [2024-07-16 00:50:28.943943] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.228 [2024-07-16 00:50:28.943953] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
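
The trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace (nvmfpid=3119856) and then blocks in waitforlisten until the application answers RPCs on /var/tmp/spdk.sock; the reactor start-up notices for the remaining cores follow in the next lines. A minimal sketch of what that wait amounts to, assuming the usual rpc.py helper and with $rootdir standing in for the SPDK checkout (the real function lives in common/autotest_common.sh):

waitforlisten_sketch() {
    # $1: pid of the just-started nvmf_tgt, $2: RPC socket it should expose
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    for ((i = 100; i != 0; i--)); do
        # give up early if the target died during start-up
        kill -s 0 "$pid" 2>/dev/null || return 1
        # done once the RPC server answers on the UNIX socket
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}
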
00:24:11.228 [2024-07-16 00:50:28.944074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.228 [2024-07-16 00:50:28.944185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.228 [2024-07-16 00:50:28.944297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:11.228 [2024-07-16 00:50:28.944299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.163 [2024-07-16 00:50:29.727641] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:12.163 00:50:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.163 00:50:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.163 Malloc1 00:24:12.163 [2024-07-16 00:50:29.834183] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.163 Malloc2 00:24:12.163 Malloc3 00:24:12.163 Malloc4 00:24:12.163 Malloc5 00:24:12.422 Malloc6 00:24:12.422 Malloc7 00:24:12.422 Malloc8 00:24:12.422 Malloc9 00:24:12.422 Malloc10 00:24:12.422 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.422 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:12.422 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.422 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3120171 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3120171 /var/tmp/bdevperf.sock 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3120171 ']' 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
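
The shutdown.sh@26-35 trace above stages ten subsystems by appending one block of RPCs per loop iteration to rpcs.txt and then replaying the whole file through rpc_cmd in a single batch, which is what produces the Malloc1..Malloc10 bdevs and the single TCP listener on 10.0.0.2:4420. A rough sketch of that loop; the malloc geometry (64 MiB, 512-byte blocks) is a placeholder here, the real values come from nvmf/common.sh:

num_subsystems=({1..10})
rm -f "$testdir/rpcs.txt"
for i in "${num_subsystems[@]}"; do
    # one malloc-backed subsystem per iteration, all sharing the same listener
    cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# shutdown.sh@35: replay every staged RPC against the running nvmf_tgt in one go,
# which keeps the per-RPC round trips out of the timed part of the test
rpc_cmd < "$testdir/rpcs.txt"
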
00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.682 { 00:24:12.682 "params": { 00:24:12.682 "name": "Nvme$subsystem", 00:24:12.682 "trtype": "$TEST_TRANSPORT", 00:24:12.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.682 "adrfam": "ipv4", 00:24:12.682 "trsvcid": "$NVMF_PORT", 00:24:12.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.682 "hdgst": ${hdgst:-false}, 00:24:12.682 "ddgst": ${ddgst:-false} 00:24:12.682 }, 00:24:12.682 "method": "bdev_nvme_attach_controller" 00:24:12.682 } 00:24:12.682 EOF 00:24:12.682 )") 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.682 { 00:24:12.682 "params": { 00:24:12.682 "name": "Nvme$subsystem", 00:24:12.682 "trtype": "$TEST_TRANSPORT", 00:24:12.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.682 "adrfam": "ipv4", 00:24:12.682 "trsvcid": "$NVMF_PORT", 00:24:12.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.682 "hdgst": ${hdgst:-false}, 00:24:12.682 "ddgst": ${ddgst:-false} 00:24:12.682 }, 00:24:12.682 "method": "bdev_nvme_attach_controller" 00:24:12.682 } 00:24:12.682 EOF 00:24:12.682 )") 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.682 { 00:24:12.682 "params": { 00:24:12.682 "name": "Nvme$subsystem", 00:24:12.682 "trtype": "$TEST_TRANSPORT", 00:24:12.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.682 "adrfam": "ipv4", 00:24:12.682 "trsvcid": "$NVMF_PORT", 00:24:12.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.682 "hdgst": ${hdgst:-false}, 00:24:12.682 "ddgst": ${ddgst:-false} 00:24:12.682 }, 00:24:12.682 "method": "bdev_nvme_attach_controller" 00:24:12.682 } 00:24:12.682 EOF 00:24:12.682 )") 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.682 { 00:24:12.682 "params": { 00:24:12.682 "name": "Nvme$subsystem", 00:24:12.682 "trtype": "$TEST_TRANSPORT", 00:24:12.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.682 "adrfam": "ipv4", 00:24:12.682 "trsvcid": "$NVMF_PORT", 
00:24:12.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.682 "hdgst": ${hdgst:-false}, 00:24:12.682 "ddgst": ${ddgst:-false} 00:24:12.682 }, 00:24:12.682 "method": "bdev_nvme_attach_controller" 00:24:12.682 } 00:24:12.682 EOF 00:24:12.682 )") 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.682 { 00:24:12.682 "params": { 00:24:12.682 "name": "Nvme$subsystem", 00:24:12.682 "trtype": "$TEST_TRANSPORT", 00:24:12.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.682 "adrfam": "ipv4", 00:24:12.682 "trsvcid": "$NVMF_PORT", 00:24:12.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.682 "hdgst": ${hdgst:-false}, 00:24:12.682 "ddgst": ${ddgst:-false} 00:24:12.682 }, 00:24:12.682 "method": "bdev_nvme_attach_controller" 00:24:12.682 } 00:24:12.682 EOF 00:24:12.682 )") 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.682 { 00:24:12.682 "params": { 00:24:12.682 "name": "Nvme$subsystem", 00:24:12.682 "trtype": "$TEST_TRANSPORT", 00:24:12.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.682 "adrfam": "ipv4", 00:24:12.682 "trsvcid": "$NVMF_PORT", 00:24:12.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.682 "hdgst": ${hdgst:-false}, 00:24:12.682 "ddgst": ${ddgst:-false} 00:24:12.682 }, 00:24:12.682 "method": "bdev_nvme_attach_controller" 00:24:12.682 } 00:24:12.682 EOF 00:24:12.682 )") 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.682 { 00:24:12.682 "params": { 00:24:12.682 "name": "Nvme$subsystem", 00:24:12.682 "trtype": "$TEST_TRANSPORT", 00:24:12.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.682 "adrfam": "ipv4", 00:24:12.682 "trsvcid": "$NVMF_PORT", 00:24:12.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.682 "hdgst": ${hdgst:-false}, 00:24:12.682 "ddgst": ${ddgst:-false} 00:24:12.682 }, 00:24:12.682 "method": "bdev_nvme_attach_controller" 00:24:12.682 } 00:24:12.682 EOF 00:24:12.682 )") 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.682 { 00:24:12.682 "params": { 00:24:12.682 "name": "Nvme$subsystem", 00:24:12.682 "trtype": "$TEST_TRANSPORT", 00:24:12.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.682 "adrfam": "ipv4", 00:24:12.682 "trsvcid": "$NVMF_PORT", 00:24:12.682 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.682 "hdgst": ${hdgst:-false}, 00:24:12.682 "ddgst": ${ddgst:-false} 00:24:12.682 }, 00:24:12.682 "method": "bdev_nvme_attach_controller" 00:24:12.682 } 00:24:12.682 EOF 00:24:12.682 )") 00:24:12.682 [2024-07-16 00:50:30.350227] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:24:12.682 [2024-07-16 00:50:30.350304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3120171 ] 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.682 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.682 { 00:24:12.682 "params": { 00:24:12.682 "name": "Nvme$subsystem", 00:24:12.682 "trtype": "$TEST_TRANSPORT", 00:24:12.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.683 "adrfam": "ipv4", 00:24:12.683 "trsvcid": "$NVMF_PORT", 00:24:12.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.683 "hdgst": ${hdgst:-false}, 00:24:12.683 "ddgst": ${ddgst:-false} 00:24:12.683 }, 00:24:12.683 "method": "bdev_nvme_attach_controller" 00:24:12.683 } 00:24:12.683 EOF 00:24:12.683 )") 00:24:12.683 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:12.683 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.683 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.683 { 00:24:12.683 "params": { 00:24:12.683 "name": "Nvme$subsystem", 00:24:12.683 "trtype": "$TEST_TRANSPORT", 00:24:12.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.683 "adrfam": "ipv4", 00:24:12.683 "trsvcid": "$NVMF_PORT", 00:24:12.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.683 "hdgst": ${hdgst:-false}, 00:24:12.683 "ddgst": ${ddgst:-false} 00:24:12.683 }, 00:24:12.683 "method": "bdev_nvme_attach_controller" 00:24:12.683 } 00:24:12.683 EOF 00:24:12.683 )") 00:24:12.683 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:12.683 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:24:12.683 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:24:12.683 00:50:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:12.683 "params": { 00:24:12.683 "name": "Nvme1", 00:24:12.683 "trtype": "tcp", 00:24:12.683 "traddr": "10.0.0.2", 00:24:12.683 "adrfam": "ipv4", 00:24:12.683 "trsvcid": "4420", 00:24:12.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:12.683 "hdgst": false, 00:24:12.683 "ddgst": false 00:24:12.683 }, 00:24:12.683 "method": "bdev_nvme_attach_controller" 00:24:12.683 },{ 00:24:12.683 "params": { 00:24:12.683 "name": "Nvme2", 00:24:12.683 "trtype": "tcp", 00:24:12.683 "traddr": "10.0.0.2", 00:24:12.683 "adrfam": "ipv4", 00:24:12.683 "trsvcid": "4420", 00:24:12.683 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:12.683 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:12.683 "hdgst": false, 00:24:12.683 "ddgst": false 00:24:12.683 }, 00:24:12.683 "method": "bdev_nvme_attach_controller" 00:24:12.683 },{ 00:24:12.683 "params": { 00:24:12.683 "name": "Nvme3", 00:24:12.683 "trtype": "tcp", 00:24:12.683 "traddr": "10.0.0.2", 00:24:12.683 "adrfam": "ipv4", 00:24:12.683 "trsvcid": "4420", 00:24:12.683 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:12.683 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:12.683 "hdgst": false, 00:24:12.683 "ddgst": false 00:24:12.683 }, 00:24:12.683 "method": "bdev_nvme_attach_controller" 00:24:12.683 },{ 00:24:12.683 "params": { 00:24:12.683 "name": "Nvme4", 00:24:12.683 "trtype": "tcp", 00:24:12.683 "traddr": "10.0.0.2", 00:24:12.683 "adrfam": "ipv4", 00:24:12.683 "trsvcid": "4420", 00:24:12.683 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:12.683 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:12.683 "hdgst": false, 00:24:12.683 "ddgst": false 00:24:12.683 }, 00:24:12.683 "method": "bdev_nvme_attach_controller" 00:24:12.683 },{ 00:24:12.683 "params": { 00:24:12.683 "name": "Nvme5", 00:24:12.683 "trtype": "tcp", 00:24:12.683 "traddr": "10.0.0.2", 00:24:12.683 "adrfam": "ipv4", 00:24:12.683 "trsvcid": "4420", 00:24:12.683 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:12.683 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:12.683 "hdgst": false, 00:24:12.683 "ddgst": false 00:24:12.683 }, 00:24:12.683 "method": "bdev_nvme_attach_controller" 00:24:12.683 },{ 00:24:12.683 "params": { 00:24:12.683 "name": "Nvme6", 00:24:12.683 "trtype": "tcp", 00:24:12.683 "traddr": "10.0.0.2", 00:24:12.683 "adrfam": "ipv4", 00:24:12.683 "trsvcid": "4420", 00:24:12.683 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:12.683 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:12.683 "hdgst": false, 00:24:12.683 "ddgst": false 00:24:12.683 }, 00:24:12.683 "method": "bdev_nvme_attach_controller" 00:24:12.683 },{ 00:24:12.683 "params": { 00:24:12.683 "name": "Nvme7", 00:24:12.683 "trtype": "tcp", 00:24:12.683 "traddr": "10.0.0.2", 00:24:12.683 "adrfam": "ipv4", 00:24:12.683 "trsvcid": "4420", 00:24:12.683 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:12.683 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:12.683 "hdgst": false, 00:24:12.683 "ddgst": false 00:24:12.683 }, 00:24:12.683 "method": "bdev_nvme_attach_controller" 00:24:12.683 },{ 00:24:12.683 "params": { 00:24:12.683 "name": "Nvme8", 00:24:12.683 "trtype": "tcp", 00:24:12.683 "traddr": "10.0.0.2", 00:24:12.683 "adrfam": "ipv4", 00:24:12.683 "trsvcid": "4420", 00:24:12.683 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:12.683 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:12.683 "hdgst": false, 
00:24:12.683 "ddgst": false 00:24:12.683 }, 00:24:12.683 "method": "bdev_nvme_attach_controller" 00:24:12.683 },{ 00:24:12.683 "params": { 00:24:12.683 "name": "Nvme9", 00:24:12.683 "trtype": "tcp", 00:24:12.683 "traddr": "10.0.0.2", 00:24:12.683 "adrfam": "ipv4", 00:24:12.683 "trsvcid": "4420", 00:24:12.683 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:12.683 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:12.683 "hdgst": false, 00:24:12.683 "ddgst": false 00:24:12.683 }, 00:24:12.683 "method": "bdev_nvme_attach_controller" 00:24:12.683 },{ 00:24:12.683 "params": { 00:24:12.683 "name": "Nvme10", 00:24:12.683 "trtype": "tcp", 00:24:12.683 "traddr": "10.0.0.2", 00:24:12.683 "adrfam": "ipv4", 00:24:12.683 "trsvcid": "4420", 00:24:12.683 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:12.683 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:12.683 "hdgst": false, 00:24:12.683 "ddgst": false 00:24:12.683 }, 00:24:12.683 "method": "bdev_nvme_attach_controller" 00:24:12.683 }' 00:24:12.683 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.683 [2024-07-16 00:50:30.436102] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.941 [2024-07-16 00:50:30.521842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.319 Running I/O for 10 seconds... 00:24:14.319 00:50:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:14.319 00:50:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:14.319 00:50:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:14.319 00:50:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.319 00:50:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:14.578 00:50:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:14.578 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:14.836 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:14.836 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:14.836 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:14.836 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:14.837 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.837 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:14.837 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.837 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:14.837 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:14.837 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=139 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 139 -ge 100 ']' 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3120171 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3120171 ']' 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3120171 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3120171 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:15.095 00:50:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3120171' 00:24:15.095 killing process with pid 3120171 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3120171 00:24:15.095 00:50:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3120171 00:24:15.354 Received shutdown signal, test time was about 1.051335 seconds 00:24:15.354 00:24:15.354 Latency(us) 00:24:15.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.354 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.354 Verification LBA range: start 0x0 length 0x400 00:24:15.354 Nvme1n1 : 1.01 197.84 12.37 0.00 0.00 315359.46 6196.13 333637.82 00:24:15.354 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.354 Verification LBA range: start 0x0 length 0x400 00:24:15.354 Nvme2n1 : 1.03 186.83 11.68 0.00 0.00 330096.95 22878.02 320292.31 00:24:15.354 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.354 Verification LBA range: start 0x0 length 0x400 00:24:15.354 Nvme3n1 : 1.00 192.35 12.02 0.00 0.00 312385.16 30742.34 312666.30 00:24:15.354 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.354 Verification LBA range: start 0x0 length 0x400 00:24:15.354 Nvme4n1 : 1.02 251.43 15.71 0.00 0.00 233260.68 23592.96 295507.78 00:24:15.354 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.354 Verification LBA range: start 0x0 length 0x400 00:24:15.354 Nvme5n1 : 1.05 182.80 11.43 0.00 0.00 313791.77 30146.56 291694.78 00:24:15.354 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.354 Verification LBA range: start 0x0 length 0x400 00:24:15.354 Nvme6n1 : 1.02 187.69 11.73 0.00 0.00 296801.12 34555.35 276442.76 00:24:15.354 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.354 Verification LBA range: start 0x0 length 0x400 00:24:15.354 Nvme7n1 : 1.01 190.69 11.92 0.00 0.00 283791.98 54573.61 241172.48 00:24:15.354 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.354 Verification LBA range: start 0x0 length 0x400 00:24:15.354 Nvme8n1 : 0.98 195.60 12.23 0.00 0.00 267478.73 50998.92 291694.78 00:24:15.354 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.354 Verification LBA range: start 0x0 length 0x400 00:24:15.354 Nvme9n1 : 1.05 183.02 11.44 0.00 0.00 281723.97 10187.87 339357.32 00:24:15.354 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:15.354 Verification LBA range: start 0x0 length 0x400 00:24:15.354 Nvme10n1 : 0.99 129.65 8.10 0.00 0.00 377443.61 37415.10 345076.83 00:24:15.354 =================================================================================================================== 00:24:15.354 Total : 1897.91 118.62 0.00 0.00 296485.89 6196.13 345076.83 00:24:15.615 00:50:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3119856 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:24:16.551 00:50:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.551 rmmod nvme_tcp 00:24:16.551 rmmod nvme_fabrics 00:24:16.551 rmmod nvme_keyring 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3119856 ']' 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3119856 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3119856 ']' 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3119856 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3119856 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3119856' 00:24:16.551 killing process with pid 3119856 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3119856 00:24:16.551 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3119856 00:24:17.119 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:17.119 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:17.119 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:17.119 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
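
With bdevperf finished, stoptarget and the nvmftestfini trap from nvmf/common.sh@484 unwind the test case: the nvmf_tgt pid (3119856) is killed and reaped, the initiator-side kernel modules are unloaded (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above), and nvmf_tcp_fini removes the namespace plumbing in the lines that follow. A simplified sketch of that sequence, assuming the helper names used by nvmf/common.sh:

nvmftestfini_sketch() {
    # stop the target that nvmfappstart launched for this test case
    kill "$nvmfpid" 2>/dev/null && wait "$nvmfpid"
    # unload the kernel NVMe-oF initiator modules loaded by "modprobe nvme-tcp"
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # undo nvmf_tcp_init: drop the target-side namespace and flush the initiator port
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
}
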
00:24:17.119 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.119 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.119 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.119 00:50:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:19.655 00:24:19.655 real 0m8.538s 00:24:19.655 user 0m26.245s 00:24:19.655 sys 0m1.503s 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.655 ************************************ 00:24:19.655 END TEST nvmf_shutdown_tc2 00:24:19.655 ************************************ 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:19.655 ************************************ 00:24:19.655 START TEST nvmf_shutdown_tc3 00:24:19.655 ************************************ 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
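
nvmf_shutdown_tc3 now calls nvmftestinit again, so the lines that follow repeat the e810 PCI discovery and namespace plumbing already traced for tc2. The nvmf_tcp_init steps seen in both places boil down to roughly the following, using the interface names discovered from the two 0x159b ports (cvl_0_0 for the target namespace, cvl_0_1 for the initiator):

nvmf_tcp_init_sketch() {
    # start clean, then move the target NIC into its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # 10.0.0.1 = initiator (host side), 10.0.0.2 = target (namespace side)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic (port 4420) in through the host-side port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions before the target is started
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}
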
00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:19.655 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:19.655 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.655 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:19.656 Found net devices under 0000:af:00.0: cvl_0_0 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.656 00:50:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:19.656 Found net devices under 0000:af:00.1: cvl_0_1 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.656 00:50:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.656 00:50:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:19.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:24:19.656 00:24:19.656 --- 10.0.0.2 ping statistics --- 00:24:19.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.656 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:19.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:24:19.656 00:24:19.656 --- 10.0.0.1 ping statistics --- 00:24:19.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.656 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3121495 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3121495 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3121495 ']' 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.656 00:50:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.656 00:50:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:19.656 [2024-07-16 00:50:37.339103] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:24:19.656 [2024-07-16 00:50:37.339156] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.656 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.656 [2024-07-16 00:50:37.428083] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.914 [2024-07-16 00:50:37.537195] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.914 [2024-07-16 00:50:37.537243] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.914 [2024-07-16 00:50:37.537261] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.914 [2024-07-16 00:50:37.537273] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.914 [2024-07-16 00:50:37.537282] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.914 [2024-07-16 00:50:37.537431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.914 [2024-07-16 00:50:37.537544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:19.914 [2024-07-16 00:50:37.537654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:19.914 [2024-07-16 00:50:37.537656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.481 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.481 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:20.481 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:20.481 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:20.481 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:20.481 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.481 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:20.481 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.481 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:20.739 [2024-07-16 00:50:38.322723] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.739 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:20.740 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.740 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:20.740 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:20.740 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.740 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:20.740 Malloc1 00:24:20.740 [2024-07-16 00:50:38.440771] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.740 Malloc2 00:24:20.740 Malloc3 00:24:20.740 Malloc4 00:24:20.997 Malloc5 00:24:20.997 Malloc6 00:24:20.997 Malloc7 00:24:20.997 Malloc8 00:24:20.997 Malloc9 00:24:21.256 Malloc10 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3121876 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3121876 /var/tmp/bdevperf.sock 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3121876 ']' 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.256 { 00:24:21.256 "params": { 00:24:21.256 "name": "Nvme$subsystem", 00:24:21.256 "trtype": "$TEST_TRANSPORT", 00:24:21.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.256 "adrfam": "ipv4", 00:24:21.256 "trsvcid": "$NVMF_PORT", 00:24:21.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.256 "hdgst": ${hdgst:-false}, 00:24:21.256 "ddgst": ${ddgst:-false} 00:24:21.256 }, 00:24:21.256 "method": "bdev_nvme_attach_controller" 00:24:21.256 } 00:24:21.256 EOF 00:24:21.256 )") 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.256 { 00:24:21.256 "params": { 00:24:21.256 "name": "Nvme$subsystem", 00:24:21.256 "trtype": "$TEST_TRANSPORT", 00:24:21.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.256 "adrfam": "ipv4", 00:24:21.256 "trsvcid": "$NVMF_PORT", 00:24:21.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:24:21.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.256 "hdgst": ${hdgst:-false}, 00:24:21.256 "ddgst": ${ddgst:-false} 00:24:21.256 }, 00:24:21.256 "method": "bdev_nvme_attach_controller" 00:24:21.256 } 00:24:21.256 EOF 00:24:21.256 )") 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.256 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.256 { 00:24:21.256 "params": { 00:24:21.256 "name": "Nvme$subsystem", 00:24:21.256 "trtype": "$TEST_TRANSPORT", 00:24:21.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.256 "adrfam": "ipv4", 00:24:21.256 "trsvcid": "$NVMF_PORT", 00:24:21.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.256 "hdgst": ${hdgst:-false}, 00:24:21.256 "ddgst": ${ddgst:-false} 00:24:21.256 }, 00:24:21.256 "method": "bdev_nvme_attach_controller" 00:24:21.257 } 00:24:21.257 EOF 00:24:21.257 )") 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.257 { 00:24:21.257 "params": { 00:24:21.257 "name": "Nvme$subsystem", 00:24:21.257 "trtype": "$TEST_TRANSPORT", 00:24:21.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.257 "adrfam": "ipv4", 00:24:21.257 "trsvcid": "$NVMF_PORT", 00:24:21.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.257 "hdgst": ${hdgst:-false}, 00:24:21.257 "ddgst": ${ddgst:-false} 00:24:21.257 }, 00:24:21.257 "method": "bdev_nvme_attach_controller" 00:24:21.257 } 00:24:21.257 EOF 00:24:21.257 )") 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.257 { 00:24:21.257 "params": { 00:24:21.257 "name": "Nvme$subsystem", 00:24:21.257 "trtype": "$TEST_TRANSPORT", 00:24:21.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.257 "adrfam": "ipv4", 00:24:21.257 "trsvcid": "$NVMF_PORT", 00:24:21.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.257 "hdgst": ${hdgst:-false}, 00:24:21.257 "ddgst": ${ddgst:-false} 00:24:21.257 }, 00:24:21.257 "method": "bdev_nvme_attach_controller" 00:24:21.257 } 00:24:21.257 EOF 00:24:21.257 )") 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.257 { 00:24:21.257 "params": { 00:24:21.257 "name": "Nvme$subsystem", 00:24:21.257 "trtype": "$TEST_TRANSPORT", 00:24:21.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.257 "adrfam": "ipv4", 00:24:21.257 "trsvcid": "$NVMF_PORT", 00:24:21.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.257 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:21.257 "hdgst": ${hdgst:-false}, 00:24:21.257 "ddgst": ${ddgst:-false} 00:24:21.257 }, 00:24:21.257 "method": "bdev_nvme_attach_controller" 00:24:21.257 } 00:24:21.257 EOF 00:24:21.257 )") 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.257 { 00:24:21.257 "params": { 00:24:21.257 "name": "Nvme$subsystem", 00:24:21.257 "trtype": "$TEST_TRANSPORT", 00:24:21.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.257 "adrfam": "ipv4", 00:24:21.257 "trsvcid": "$NVMF_PORT", 00:24:21.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.257 "hdgst": ${hdgst:-false}, 00:24:21.257 "ddgst": ${ddgst:-false} 00:24:21.257 }, 00:24:21.257 "method": "bdev_nvme_attach_controller" 00:24:21.257 } 00:24:21.257 EOF 00:24:21.257 )") 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:21.257 [2024-07-16 00:50:38.982708] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:24:21.257 [2024-07-16 00:50:38.982778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3121876 ] 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.257 { 00:24:21.257 "params": { 00:24:21.257 "name": "Nvme$subsystem", 00:24:21.257 "trtype": "$TEST_TRANSPORT", 00:24:21.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.257 "adrfam": "ipv4", 00:24:21.257 "trsvcid": "$NVMF_PORT", 00:24:21.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.257 "hdgst": ${hdgst:-false}, 00:24:21.257 "ddgst": ${ddgst:-false} 00:24:21.257 }, 00:24:21.257 "method": "bdev_nvme_attach_controller" 00:24:21.257 } 00:24:21.257 EOF 00:24:21.257 )") 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.257 { 00:24:21.257 "params": { 00:24:21.257 "name": "Nvme$subsystem", 00:24:21.257 "trtype": "$TEST_TRANSPORT", 00:24:21.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.257 "adrfam": "ipv4", 00:24:21.257 "trsvcid": "$NVMF_PORT", 00:24:21.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.257 "hdgst": ${hdgst:-false}, 00:24:21.257 "ddgst": ${ddgst:-false} 00:24:21.257 }, 00:24:21.257 "method": "bdev_nvme_attach_controller" 00:24:21.257 } 00:24:21.257 EOF 00:24:21.257 )") 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:21.257 00:50:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.257 00:50:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.257 { 00:24:21.257 "params": { 00:24:21.257 "name": "Nvme$subsystem", 00:24:21.257 "trtype": "$TEST_TRANSPORT", 00:24:21.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.257 "adrfam": "ipv4", 00:24:21.257 "trsvcid": "$NVMF_PORT", 00:24:21.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.257 "hdgst": ${hdgst:-false}, 00:24:21.257 "ddgst": ${ddgst:-false} 00:24:21.257 }, 00:24:21.257 "method": "bdev_nvme_attach_controller" 00:24:21.257 } 00:24:21.257 EOF 00:24:21.257 )") 00:24:21.257 00:50:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:21.257 00:50:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:24:21.257 00:50:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:24:21.257 00:50:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:21.257 "params": { 00:24:21.257 "name": "Nvme1", 00:24:21.257 "trtype": "tcp", 00:24:21.257 "traddr": "10.0.0.2", 00:24:21.257 "adrfam": "ipv4", 00:24:21.257 "trsvcid": "4420", 00:24:21.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.257 "hdgst": false, 00:24:21.257 "ddgst": false 00:24:21.257 }, 00:24:21.257 "method": "bdev_nvme_attach_controller" 00:24:21.257 },{ 00:24:21.257 "params": { 00:24:21.257 "name": "Nvme2", 00:24:21.257 "trtype": "tcp", 00:24:21.257 "traddr": "10.0.0.2", 00:24:21.257 "adrfam": "ipv4", 00:24:21.257 "trsvcid": "4420", 00:24:21.257 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:21.257 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:21.257 "hdgst": false, 00:24:21.257 "ddgst": false 00:24:21.257 }, 00:24:21.257 "method": "bdev_nvme_attach_controller" 00:24:21.257 },{ 00:24:21.257 "params": { 00:24:21.257 "name": "Nvme3", 00:24:21.257 "trtype": "tcp", 00:24:21.257 "traddr": "10.0.0.2", 00:24:21.257 "adrfam": "ipv4", 00:24:21.257 "trsvcid": "4420", 00:24:21.257 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:21.257 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:21.257 "hdgst": false, 00:24:21.257 "ddgst": false 00:24:21.257 }, 00:24:21.257 "method": "bdev_nvme_attach_controller" 00:24:21.257 },{ 00:24:21.257 "params": { 00:24:21.257 "name": "Nvme4", 00:24:21.257 "trtype": "tcp", 00:24:21.257 "traddr": "10.0.0.2", 00:24:21.257 "adrfam": "ipv4", 00:24:21.257 "trsvcid": "4420", 00:24:21.257 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:21.257 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:21.257 "hdgst": false, 00:24:21.257 "ddgst": false 00:24:21.257 }, 00:24:21.257 "method": "bdev_nvme_attach_controller" 00:24:21.257 },{ 00:24:21.257 "params": { 00:24:21.257 "name": "Nvme5", 00:24:21.257 "trtype": "tcp", 00:24:21.257 "traddr": "10.0.0.2", 00:24:21.257 "adrfam": "ipv4", 00:24:21.257 "trsvcid": "4420", 00:24:21.257 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:21.257 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:21.257 "hdgst": false, 00:24:21.257 "ddgst": false 00:24:21.257 }, 00:24:21.257 "method": "bdev_nvme_attach_controller" 00:24:21.257 },{ 00:24:21.257 "params": { 00:24:21.257 "name": "Nvme6", 00:24:21.257 "trtype": "tcp", 00:24:21.257 "traddr": "10.0.0.2", 00:24:21.257 "adrfam": "ipv4", 00:24:21.257 "trsvcid": "4420", 00:24:21.257 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:21.257 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:21.257 "hdgst": false, 00:24:21.257 "ddgst": false 
00:24:21.257 }, 00:24:21.257 "method": "bdev_nvme_attach_controller" 00:24:21.257 },{ 00:24:21.257 "params": { 00:24:21.258 "name": "Nvme7", 00:24:21.258 "trtype": "tcp", 00:24:21.258 "traddr": "10.0.0.2", 00:24:21.258 "adrfam": "ipv4", 00:24:21.258 "trsvcid": "4420", 00:24:21.258 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:21.258 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:21.258 "hdgst": false, 00:24:21.258 "ddgst": false 00:24:21.258 }, 00:24:21.258 "method": "bdev_nvme_attach_controller" 00:24:21.258 },{ 00:24:21.258 "params": { 00:24:21.258 "name": "Nvme8", 00:24:21.258 "trtype": "tcp", 00:24:21.258 "traddr": "10.0.0.2", 00:24:21.258 "adrfam": "ipv4", 00:24:21.258 "trsvcid": "4420", 00:24:21.258 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:21.258 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:21.258 "hdgst": false, 00:24:21.258 "ddgst": false 00:24:21.258 }, 00:24:21.258 "method": "bdev_nvme_attach_controller" 00:24:21.258 },{ 00:24:21.258 "params": { 00:24:21.258 "name": "Nvme9", 00:24:21.258 "trtype": "tcp", 00:24:21.258 "traddr": "10.0.0.2", 00:24:21.258 "adrfam": "ipv4", 00:24:21.258 "trsvcid": "4420", 00:24:21.258 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:21.258 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:21.258 "hdgst": false, 00:24:21.258 "ddgst": false 00:24:21.258 }, 00:24:21.258 "method": "bdev_nvme_attach_controller" 00:24:21.258 },{ 00:24:21.258 "params": { 00:24:21.258 "name": "Nvme10", 00:24:21.258 "trtype": "tcp", 00:24:21.258 "traddr": "10.0.0.2", 00:24:21.258 "adrfam": "ipv4", 00:24:21.258 "trsvcid": "4420", 00:24:21.258 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:21.258 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:21.258 "hdgst": false, 00:24:21.258 "ddgst": false 00:24:21.258 }, 00:24:21.258 "method": "bdev_nvme_attach_controller" 00:24:21.258 }' 00:24:21.258 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.258 [2024-07-16 00:50:39.066874] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.515 [2024-07-16 00:50:39.154375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.417 Running I/O for 10 seconds... 
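gen_nvmf_target_json, whose expansion dominates the trace above, follows a simple pattern: one bdev_nvme_attach_controller fragment per subsystem is appended to a bash array via a heredoc, the fragments are joined with a comma IFS, and the result is checked with jq before bdevperf consumes it (the --json /dev/fd/63 argument suggests the generated document is handed over through process substitution). A minimal sketch of that pattern for two subsystems, not the exact helper; the surrounding array brackets are added here only so jq has a complete document to validate:

  config=()
  for subsystem in 1 2; do                     # the run above iterates {1..10}
    config+=("$(cat <<EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
EOF
  )")
  done
  IFS=,                                        # join the fragments with commas
  printf '[%s]\n' "${config[*]}" | jq .        # pretty-print and validate the result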
00:24:23.417 00:50:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.417 00:50:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:23.417 00:50:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:23.417 00:50:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.417 00:50:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:23.417 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.417 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:23.417 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:23.417 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:23.417 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:23.417 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:23.417 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:23.417 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:23.418 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:23.418 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:23.418 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:23.418 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.418 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:23.418 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.418 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:23.418 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:23.418 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:23.675 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:23.675 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:23.675 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:23.675 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:23.675 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.675 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:23.675 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.676 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
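The waitforio calls traced above and below implement a bounded poll: up to ten samples of bdevperf's per-bdev iostat, 0.25 s apart, succeeding once Nvme1n1 has completed at least 100 reads (the first sample here returns 3, so the loop sleeps and retries). A standalone sketch of the same loop, assuming SPDK's scripts/rpc.py is on PATH in place of the harness's rpc_cmd wrapper:

  waitforio() {
    local rpc_sock=$1 bdev=$2
    local i read_io_count ret=1
    for (( i = 10; i != 0; i-- )); do
      # ask bdevperf over its RPC socket how many reads this bdev has served so far
      read_io_count=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                      | jq -r '.bdevs[0].num_read_ops')
      if [ "$read_io_count" -ge 100 ]; then
        ret=0
        break
      fi
      sleep 0.25
    done
    return $ret
  }
  waitforio /var/tmp/bdevperf.sock Nvme1n1     # as issued in the trace above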
-- # read_io_count=67 00:24:23.676 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:23.676 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3121495 00:24:23.934 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3121495 ']' 00:24:24.208 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3121495 00:24:24.208 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:24:24.208 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:24.208 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3121495 00:24:24.208 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:24.208 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:24.208 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3121495' 00:24:24.208 killing process with pid 3121495 00:24:24.209 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3121495 00:24:24.209 00:50:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3121495 00:24:24.209 [2024-07-16 00:50:41.826894] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827038] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827064] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827084] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827119] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827174] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827212] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827231] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827283] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827302] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827321] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827340] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827360] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827380] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827399] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827418] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827437] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827456] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827474] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827493] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827511] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827529] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827547] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827566] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827584] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827603] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827621] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827644] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827663] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827682] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827720] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827739] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827758] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827798] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827835] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827853] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827872] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827890] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827908] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the 
state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827946] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827964] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.827983] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.828002] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.828020] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.828039] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.828057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.828076] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.828095] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.828113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.828132] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.828151] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.828173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.828192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.828210] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.828229] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74640 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.831835] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c85100 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.831903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c85100 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.831928] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c85100 is same with the state(5) to be set 00:24:24.209 [2024-07-16 00:50:41.833876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.209 [2024-07-16 00:50:41.833919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.209 [2024-07-16 00:50:41.833941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.209 [2024-07-16 00:50:41.833953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.209 [2024-07-16 00:50:41.833967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.209 [2024-07-16 00:50:41.833979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.209 [2024-07-16 00:50:41.833992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.209 [2024-07-16 00:50:41.834003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.209 [2024-07-16 00:50:41.834015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.209 [2024-07-16 00:50:41.834025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.209 [2024-07-16 00:50:41.834038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.209 [2024-07-16 00:50:41.834049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.209 [2024-07-16 00:50:41.834063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with [2024-07-16 00:50:41.834205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:1the state(5) to be set 00:24:24.210 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834226] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210 [2024-07-16 00:50:41.834233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210 [2024-07-16 00:50:41.834264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834274] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210 [2024-07-16 00:50:41.834290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834296] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with [2024-07-16 00:50:41.834301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:24:24.210 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834317] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210 [2024-07-16 00:50:41.834328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834337] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with [2024-07-16 00:50:41.834342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:1the state(5) to be set 00:24:24.210 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834359] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with [2024-07-16 00:50:41.834370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:1the state(5) to be set 00:24:24.210 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834388] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210 [2024-07-16 00:50:41.834398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834410] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210 [2024-07-16 00:50:41.834424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834430] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with [2024-07-16 00:50:41.834435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:24:24.210 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834451] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210 [2024-07-16 00:50:41.834463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834471] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with [2024-07-16 00:50:41.834477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:1the state(5) to be set 00:24:24.210 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834493] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210 [2024-07-16 00:50:41.834503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210 [2024-07-16 00:50:41.834512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with [2024-07-16 00:50:41.834516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:24:24.210 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210 [2024-07-16 00:50:41.834534] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210
[2024-07-16 00:50:41.834533] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210
[2024-07-16 00:50:41.834545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210
[2024-07-16 00:50:41.834553] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210
[2024-07-16 00:50:41.834559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210
[2024-07-16 00:50:41.834572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210
[2024-07-16 00:50:41.834574] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210
[2024-07-16 00:50:41.834587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210
[2024-07-16 00:50:41.834594] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210
[2024-07-16 00:50:41.834599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210
[2024-07-16 00:50:41.834616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210
[2024-07-16 00:50:41.834615] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210
[2024-07-16 00:50:41.834628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210
[2024-07-16 00:50:41.834636] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210
[2024-07-16 00:50:41.834641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210
[2024-07-16 00:50:41.834655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210
[2024-07-16 00:50:41.834657] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210
[2024-07-16 00:50:41.834669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210
[2024-07-16 00:50:41.834676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210
[2024-07-16 00:50:41.834680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210
[2024-07-16 00:50:41.834698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210
[2024-07-16 00:50:41.834697] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210
[2024-07-16 00:50:41.834709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210
[2024-07-16 00:50:41.834717] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210
[2024-07-16 00:50:41.834723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210
[2024-07-16 00:50:41.834736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210
[2024-07-16 00:50:41.834738] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210
[2024-07-16 00:50:41.834749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210
[2024-07-16 00:50:41.834761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210
[2024-07-16 00:50:41.834759] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210
[2024-07-16 00:50:41.834776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.210
[2024-07-16 00:50:41.834780] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.210
[2024-07-16 00:50:41.834787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.210
[2024-07-16 00:50:41.834809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.834808] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.834821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.834828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.834834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.834847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.834850] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.834860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.834871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.834870] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.834885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.834890] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.834896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.834912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.834912] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.834922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.834932] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.834936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.834951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.834953] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.834964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.834976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.834973] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.834991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.834994] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835019] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835040] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835059] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835079] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835101] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835122] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835162] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835181] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835202] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835233] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835263] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835287] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835307] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835326] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835349] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835369] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835390] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835410] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835429] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835454] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835474] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835495] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.211
[2024-07-16 00:50:41.835506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.211
[2024-07-16 00:50:41.835517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.211
[2024-07-16 00:50:41.835515] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.212
[2024-07-16 00:50:41.835532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.212
[2024-07-16 00:50:41.835536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa12c0 is same with the state(5) to be set 00:24:24.212
[2024-07-16 00:50:41.835542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.212
[2024-07-16 00:50:41.835559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.212 [2024-07-16 00:50:41.835569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.212 [2024-07-16 00:50:41.835581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.212 [2024-07-16 00:50:41.835591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.212 [2024-07-16 00:50:41.835623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:24.212 [2024-07-16 00:50:41.836033] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb57d60 was disconnected and freed. reset controller. 00:24:24.212 [2024-07-16 00:50:41.836113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.212 [2024-07-16 00:50:41.836127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.212 [2024-07-16 00:50:41.836139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.212 [2024-07-16 00:50:41.836150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.212 [2024-07-16 00:50:41.836162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.212 [2024-07-16 00:50:41.836176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.212 [2024-07-16 00:50:41.836187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.212 [2024-07-16 00:50:41.836197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.212 [2024-07-16 00:50:41.836206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ae60 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.836298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.212 [2024-07-16 00:50:41.836311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.212 [2024-07-16 00:50:41.836322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.212 [2024-07-16 00:50:41.836332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.212 [2024-07-16 00:50:41.836342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.212 [2024-07-16 00:50:41.836353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.212 
[2024-07-16 00:50:41.836363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.212 [2024-07-16 00:50:41.836373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.212 [2024-07-16 00:50:41.836382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a370 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.838921] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.838986] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839009] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839028] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839066] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839086] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839105] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839123] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839160] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839178] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839197] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839243] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839273] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839294] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839313] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839331] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839350] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839369] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839388] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839407] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839426] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839444] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839462] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839481] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839499] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839518] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839555] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839574] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839592] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839630] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839668] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839687] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839705] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839723] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839742] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839765] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839802] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839821] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839839] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.839857] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa17c0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.840613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:24.212 [2024-07-16 00:50:41.840655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2ae60 (9): Bad file descriptor 00:24:24.212 [2024-07-16 00:50:41.842988] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.843032] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.843045] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.843057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.843068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.843080] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.843091] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.843102] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.843113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.843124] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.843135] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.843146] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.212 [2024-07-16 00:50:41.843156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 
[2024-07-16 00:50:41.843167] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843178] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843189] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843203] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843214] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843225] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843242] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843263] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843275] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843286] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843297] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843308] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843319] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843330] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843341] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843352] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843362] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843373] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843384] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843396] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843406] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843417] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843429] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843440] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843451] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843461] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843483] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843494] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843505] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843516] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843528] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843538] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843551] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843574] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843585] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843595] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843606] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843617] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843628] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843638] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843660] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843671] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843682] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843693] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843703] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843715] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.843725] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa1ca0 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.844497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.213 [2024-07-16 00:50:41.844534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2ae60 with addr=10.0.0.2, port=4420 00:24:24.213 [2024-07-16 00:50:41.844547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ae60 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.845562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2ae60 (9): Bad file descriptor 00:24:24.213 [2024-07-16 00:50:41.846203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:24.213 [2024-07-16 00:50:41.846229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:24.213 [2024-07-16 00:50:41.846240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:24:24.213 [2024-07-16 00:50:41.846302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.213 [2024-07-16 00:50:41.846317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.213 [2024-07-16 00:50:41.846329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.213 [2024-07-16 00:50:41.846340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.213 [2024-07-16 00:50:41.846354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.213 [2024-07-16 00:50:41.846364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.213 [2024-07-16 00:50:41.846375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.213 [2024-07-16 00:50:41.846386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.213 [2024-07-16 00:50:41.846397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3b020 is same with the state(5) to be set 00:24:24.213 [2024-07-16 00:50:41.846456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.213 [2024-07-16 00:50:41.846469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.213 [2024-07-16 00:50:41.846480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.213 [2024-07-16 00:50:41.846490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.214 [2024-07-16 00:50:41.846511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.214 [2024-07-16 00:50:41.846531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d9c0 is same with the state(5) to be set 00:24:24.214 [2024-07-16 00:50:41.846571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.214 [2024-07-16 00:50:41.846583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.214 [2024-07-16 00:50:41.846604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.214 [2024-07-16 00:50:41.846625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.214 [2024-07-16 00:50:41.846646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb79100 is same with the state(5) to be set 00:24:24.214 [2024-07-16 00:50:41.846685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.214 [2024-07-16 00:50:41.846697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.214 [2024-07-16 00:50:41.846720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.214 [2024-07-16 00:50:41.846741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.214 [2024-07-16 00:50:41.846762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd36040 is same with the state(5) to be set 00:24:24.214 [2024-07-16 00:50:41.846789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6a370 (9): Bad file descriptor 00:24:24.214 [2024-07-16 00:50:41.846837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.846851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.846878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 
[2024-07-16 00:50:41.846890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.846901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.846924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.846946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.846969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.846981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.846991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847123] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.214 [2024-07-16 00:50:41.847525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.214 [2024-07-16 00:50:41.847538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.847985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.847995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd019e0 is same with the state(5) to be set 00:24:24.215 [2024-07-16 00:50:41.848368] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd019e0 was disconnected and freed. reset controller. 00:24:24.215 [2024-07-16 00:50:41.848431] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:24.215 [2024-07-16 00:50:41.848638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.215 [2024-07-16 00:50:41.848788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.215 [2024-07-16 00:50:41.848798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.848810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.848820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.848833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.848843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.848856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.848873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.848885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.848895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.848907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.848917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.848930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.848939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.848951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.848961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.848973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.848982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.848995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:24.216 [2024-07-16 00:50:41.849268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 
00:50:41.849495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849719] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.216 [2024-07-16 00:50:41.849751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.216 [2024-07-16 00:50:41.849764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.849774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.849786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.849797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.849809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.849819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.856552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.856570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.856586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.856597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.856613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.856624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.856640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.856651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.856666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.856681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.856696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.856710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.856726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.856737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.856753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.856765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.856780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.856792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.856806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.856819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.856833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.856846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.856860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.217 [2024-07-16 00:50:41.856872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.217 [2024-07-16 00:50:41.856946] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcb9f10 was disconnected and freed. reset controller. 00:24:24.217 [2024-07-16 00:50:41.858336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.217 [2024-07-16 00:50:41.858422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3b020 (9): Bad file descriptor 00:24:24.217 [2024-07-16 00:50:41.858478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8d9c0 (9): Bad file descriptor 00:24:24.217 [2024-07-16 00:50:41.858503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb79100 (9): Bad file descriptor 00:24:24.217 [2024-07-16 00:50:41.858527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd36040 (9): Bad file descriptor 00:24:24.217 [2024-07-16 00:50:41.858551] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:24.217 [2024-07-16 00:50:41.860294] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:24.217 [2024-07-16 00:50:41.861351] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861413] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861437] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861464] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861485] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861505] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861524] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861544] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861582] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861602] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861621] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861641] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861699] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861717] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861736] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861755] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861774] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861793] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to 
be set 00:24:24.217 [2024-07-16 00:50:41.861813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861832] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861852] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861872] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861891] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861911] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861930] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861950] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861969] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.861992] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.862012] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.862031] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.862051] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.862071] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.862091] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.862110] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.217 [2024-07-16 00:50:41.862129] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862149] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862169] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862188] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862208] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862227] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862247] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862279] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862299] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862318] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862338] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862357] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862375] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862395] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862413] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862453] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862473] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862493] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862535] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862554] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862573] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2680 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.862907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.218 [2024-07-16 00:50:41.862947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:24.218 [2024-07-16 00:50:41.863078] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:24.218 [2024-07-16 00:50:41.863461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:24.218 [2024-07-16 00:50:41.863696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.218 [2024-07-16 00:50:41.863721] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb6a370 with addr=10.0.0.2, port=4420 00:24:24.218 [2024-07-16 00:50:41.863735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a370 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.863927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.218 [2024-07-16 00:50:41.863945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3b020 with addr=10.0.0.2, port=4420 00:24:24.218 [2024-07-16 00:50:41.863958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3b020 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.865335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.218 [2024-07-16 00:50:41.865367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2ae60 with addr=10.0.0.2, port=4420 00:24:24.218 [2024-07-16 00:50:41.865380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ae60 is same with the state(5) to be set 00:24:24.218 [2024-07-16 00:50:41.865397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6a370 (9): Bad file descriptor 00:24:24.218 [2024-07-16 00:50:41.865414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3b020 (9): Bad file descriptor 00:24:24.218 [2024-07-16 00:50:41.865948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2ae60 (9): Bad file descriptor 00:24:24.218 [2024-07-16 00:50:41.865976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.218 [2024-07-16 00:50:41.865988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.218 [2024-07-16 00:50:41.866000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.218 [2024-07-16 00:50:41.866021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:24.218 [2024-07-16 00:50:41.866033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:24.218 [2024-07-16 00:50:41.866044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:24.218 [2024-07-16 00:50:41.866336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.218 [2024-07-16 00:50:41.866355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.218 [2024-07-16 00:50:41.866365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:24.218 [2024-07-16 00:50:41.866374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:24.218 [2024-07-16 00:50:41.866385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:24:24.218 [2024-07-16 00:50:41.866450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 
00:50:41.866696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.218 [2024-07-16 00:50:41.866916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.218 [2024-07-16 00:50:41.866928] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.866938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.866951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.866961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.866973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.866984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.866997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.219 [2024-07-16 00:50:41.867901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.219 [2024-07-16 00:50:41.867910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.867927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.867937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.867948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc16590 is same with the state(5) to be set 00:24:24.220 [2024-07-16 00:50:41.868004] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc16590 was disconnected and freed. reset controller. 00:24:24.220 [2024-07-16 00:50:41.868282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.220 [2024-07-16 00:50:41.869826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:24.220 [2024-07-16 00:50:41.869882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbaf970 (9): Bad file descriptor 00:24:24.220 [2024-07-16 00:50:41.870152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.870980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.870993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.871002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.871015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.871025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.871038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.871048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.220 [2024-07-16 00:50:41.871060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.220 [2024-07-16 00:50:41.871070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.871646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 [2024-07-16 00:50:41.871658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd02cc0 is same with the state(5) to be set 00:24:24.221 [2024-07-16 00:50:41.872951] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.221 [2024-07-16 00:50:41.872995] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.221 [2024-07-16 00:50:41.873015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.221 [2024-07-16 00:50:41.873035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.221 [2024-07-16 00:50:41.873054] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.221 [2024-07-16 00:50:41.873074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.221 [2024-07-16 00:50:41.873094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.221 [2024-07-16 00:50:41.873108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221 [2024-07-16 00:50:41.873114] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.221 [2024-07-16 00:50:41.873124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221 
[2024-07-16 00:50:41.873135] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.221
[2024-07-16 00:50:41.873140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221
[2024-07-16 00:50:41.873155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221
[2024-07-16 00:50:41.873157] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.221
[2024-07-16 00:50:41.873167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221
[2024-07-16 00:50:41.873178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.221
[2024-07-16 00:50:41.873177] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.221
[2024-07-16 00:50:41.873199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.221
[2024-07-16 00:50:41.873201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.221
[2024-07-16 00:50:41.873211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873222] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873244] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873277] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873297] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873319] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873339] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873361] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873381] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873402] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873425] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873445] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873467] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873487] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873508] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873530] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873550] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873571] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873592] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873614] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873700] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873721] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873763] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873805] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873848] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873868] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.222
[2024-07-16 00:50:41.873911] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.222
[2024-07-16 00:50:41.873931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.222
[2024-07-16 00:50:41.873936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.873949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.873952] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.873962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.873973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.873971] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.873988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.873992] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.873999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.874014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.874014] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.874035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.874061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.874063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.874084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.874083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.874109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.874105] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.874128] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.874149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.874148] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.874174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.874170] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.874192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.874211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.874212] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.874235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.874232] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.874262] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.874282] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.874300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.874301] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa2b60 is same with the state(5) to be set 00:24:24.223
[2024-07-16 00:50:41.874314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.874325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223
[2024-07-16 00:50:41.874338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223
[2024-07-16 00:50:41.874348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.874360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.874370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.874382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.874393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.874405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1e070 is same with the state(5) to be set 00:24:24.223 [2024-07-16 00:50:41.875777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.875793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.875807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.875817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.875830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.875840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.875852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.875863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.875875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.875885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.875901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.875911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.875923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.875934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.875947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.875957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.875969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.875979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.875992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.876002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.876014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.876024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.876036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.876046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.876058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.876068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.876080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.876089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.876102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.876111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.876123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.223 [2024-07-16 00:50:41.876133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.223 [2024-07-16 00:50:41.876145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:24.224 [2024-07-16 00:50:41.876409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 
00:50:41.876631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876860] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.876984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.876994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.877006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.877016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.877030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.877041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.877053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.877063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.224 [2024-07-16 00:50:41.877076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.224 [2024-07-16 00:50:41.877085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-07-16 00:50:41.877098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-07-16 00:50:41.877108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-07-16 00:50:41.877120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-07-16 00:50:41.877129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-07-16 00:50:41.877142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-07-16 00:50:41.877152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-07-16 00:50:41.877165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-07-16 00:50:41.877175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-07-16 00:50:41.877188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-07-16 00:50:41.877197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-07-16 00:50:41.877210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.225 [2024-07-16 00:50:41.877220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.225 [2024-07-16 00:50:41.877231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb8b60 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877489] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877524] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877537] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877549] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877561] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877572] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877583] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 
00:24:24.225 [2024-07-16 00:50:41.877600] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877612] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877623] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877645] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877657] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877668] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877690] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877702] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877713] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877735] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877746] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877757] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877768] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877779] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877790] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877801] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877814] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877836] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877868] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877880] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877891] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877905] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877927] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877948] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877959] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877970] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877982] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.877993] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.878004] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.878015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.878026] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.878037] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.878048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.878059] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.878069] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225 [2024-07-16 00:50:41.878080] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225
[2024-07-16 00:50:41.878092] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225
[2024-07-16 00:50:41.878102] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225
[2024-07-16 00:50:41.878113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225
[2024-07-16 00:50:41.878124] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225
[2024-07-16 00:50:41.878135] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225
[2024-07-16 00:50:41.878146] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225
[2024-07-16 00:50:41.878157] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225
[2024-07-16 00:50:41.878168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225
[2024-07-16 00:50:41.878179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225
[2024-07-16 00:50:41.878190] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225
[2024-07-16 00:50:41.878203] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225
[2024-07-16 00:50:41.878214] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3040 is same with the state(5) to be set 00:24:24.225
[2024-07-16 00:50:41.879132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:24.225
[2024-07-16 00:50:41.879158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:24.225
[2024-07-16 00:50:41.879171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:24.225
[2024-07-16 00:50:41.879240] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879305] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879326] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879346] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.226
[2024-07-16 00:50:41.879369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbaf970 with addr=10.0.0.2, port=4420 00:24:24.226
[2024-07-16 00:50:41.879367] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaf970 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879389] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879410] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879429] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879449] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879467] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879487] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879506] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879525] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879544] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879582] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879600] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879620] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879639] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879658] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879685] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879704] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879723] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879762] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879780] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879800] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879819] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.226
[2024-07-16 00:50:41.879838] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd36040 with addr=10.0.0.2, port=4420 00:24:24.226
[2024-07-16 00:50:41.879857] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd36040 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879878] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879917] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879936] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879955] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879974] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.879992] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880011] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880030] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.226
[2024-07-16 00:50:41.880049] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb79100 with addr=10.0.0.2, port=4420 00:24:24.226
[2024-07-16 00:50:41.880070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb79100 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880069] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880091] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880116] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880135] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880154] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880191] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880210] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880229] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880248] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.226
[2024-07-16 00:50:41.880276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8d9c0 with addr=10.0.0.2, port=4420 00:24:24.226
[2024-07-16 00:50:41.880280] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d9c0 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbaf970 (9): Bad file descriptor 00:24:24.226
[2024-07-16 00:50:41.880302] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880322] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880341] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880360] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880379] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880398] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880417] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16 00:50:41.880436] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226
[2024-07-16
00:50:41.880455] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226 [2024-07-16 00:50:41.880474] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226 [2024-07-16 00:50:41.880493] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226 [2024-07-16 00:50:41.880512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa3520 is same with the state(5) to be set 00:24:24.226 [2024-07-16 00:50:41.881406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:24.226 [2024-07-16 00:50:41.881429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.226 [2024-07-16 00:50:41.881446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:24.226 [2024-07-16 00:50:41.881481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd36040 (9): Bad file descriptor 00:24:24.226 [2024-07-16 00:50:41.881494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb79100 (9): Bad file descriptor 00:24:24.226 [2024-07-16 00:50:41.881506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8d9c0 (9): Bad file descriptor 00:24:24.226 [2024-07-16 00:50:41.881518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:24.226 [2024-07-16 00:50:41.881527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:24.226 [2024-07-16 00:50:41.881538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:24:24.226 [2024-07-16 00:50:41.881573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.226 [2024-07-16 00:50:41.881586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.226 [2024-07-16 00:50:41.881597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.227 [2024-07-16 00:50:41.881607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.881618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.227 [2024-07-16 00:50:41.881631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.881641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.227 [2024-07-16 00:50:41.881651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.881661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaf590 is same with the state(5) to be set 00:24:24.227 [2024-07-16 00:50:41.881696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.227 [2024-07-16 00:50:41.881709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.881720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.227 [2024-07-16 00:50:41.881730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.881741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.227 [2024-07-16 00:50:41.881751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.881763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.227 [2024-07-16 00:50:41.881773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.881784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd28cb0 is same with the state(5) to be set 00:24:24.227 [2024-07-16 00:50:41.881815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.227 [2024-07-16 00:50:41.881827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.881842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.227 [2024-07-16 00:50:41.881852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.881863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.227 [2024-07-16 00:50:41.881873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.881884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.227 [2024-07-16 00:50:41.881894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.881903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66b610 is same with the state(5) to be set 00:24:24.227 [2024-07-16 00:50:41.881926] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:24.227 [2024-07-16 00:50:41.882131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.227 [2024-07-16 00:50:41.882298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-07-16 00:50:41.882316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3b020 with addr=10.0.0.2, port=4420 00:24:24.227 [2024-07-16 00:50:41.882327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3b020 is same with the state(5) to be set 00:24:24.227 [2024-07-16 00:50:41.882450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-07-16 00:50:41.882464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb6a370 with addr=10.0.0.2, port=4420 00:24:24.227 [2024-07-16 00:50:41.882474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a370 is same with the state(5) to be set 00:24:24.227 [2024-07-16 00:50:41.882653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.227 [2024-07-16 00:50:41.882666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2ae60 with addr=10.0.0.2, port=4420 00:24:24.227 [2024-07-16 00:50:41.882677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ae60 is same with the state(5) to be set 00:24:24.227 [2024-07-16 00:50:41.882687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:24.227 [2024-07-16 00:50:41.882696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:24.227 [2024-07-16 00:50:41.882706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:24.227 [2024-07-16 00:50:41.882720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:24.227 [2024-07-16 00:50:41.882730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:24.227 [2024-07-16 00:50:41.882739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:24.227 [2024-07-16 00:50:41.882753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:24.227 [2024-07-16 00:50:41.882762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:24.227 [2024-07-16 00:50:41.882772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:24.227 [2024-07-16 00:50:41.882850] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:24.227 [2024-07-16 00:50:41.882966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.227 [2024-07-16 00:50:41.882985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.227 [2024-07-16 00:50:41.882993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.227 [2024-07-16 00:50:41.883005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3b020 (9): Bad file descriptor 00:24:24.227 [2024-07-16 00:50:41.883018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6a370 (9): Bad file descriptor 00:24:24.227 [2024-07-16 00:50:41.883030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2ae60 (9): Bad file descriptor 00:24:24.227 [2024-07-16 00:50:41.883082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.227 [2024-07-16 00:50:41.883505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.227 [2024-07-16 00:50:41.883518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.883982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.883992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.228 [2024-07-16 00:50:41.884491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.228 [2024-07-16 00:50:41.884503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-07-16 00:50:41.884515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-07-16 00:50:41.884526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-07-16 00:50:41.884539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-07-16 00:50:41.884549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-07-16 00:50:41.884562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.229 [2024-07-16 00:50:41.884572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.229 [2024-07-16 00:50:41.884584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18fb0 is same with the state(5) to be set 00:24:24.229 [2024-07-16 00:50:41.884638] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0xc18fb0 was disconnected and freed. reset controller. 00:24:24.229 [2024-07-16 00:50:41.884715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:24.229 [2024-07-16 00:50:41.884727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:24.229 [2024-07-16 00:50:41.884737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:24.229 [2024-07-16 00:50:41.884751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.229 [2024-07-16 00:50:41.884760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.229 [2024-07-16 00:50:41.884769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.229 [2024-07-16 00:50:41.884782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:24.229 [2024-07-16 00:50:41.884791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:24.229 [2024-07-16 00:50:41.884801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:24.229 [2024-07-16 00:50:41.886251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.229 [2024-07-16 00:50:41.886277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.229 [2024-07-16 00:50:41.886287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.229 [2024-07-16 00:50:41.886296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:24.229 [2024-07-16 00:50:41.886313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd28cb0 (9): Bad file descriptor 00:24:24.229 [2024-07-16 00:50:41.886382] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:24.229 [2024-07-16 00:50:41.887002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-07-16 00:50:41.887022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd28cb0 with addr=10.0.0.2, port=4420 00:24:24.229 [2024-07-16 00:50:41.887033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd28cb0 is same with the state(5) to be set 00:24:24.229 [2024-07-16 00:50:41.887076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd28cb0 (9): Bad file descriptor 00:24:24.229 [2024-07-16 00:50:41.887115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:24.229 [2024-07-16 00:50:41.887131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:24.229 [2024-07-16 00:50:41.887141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:24.229 [2024-07-16 00:50:41.887174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
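Note on the long ABORTED - SQ DELETION (00/08) runs: SPDK prints completion status as (SCT/SC). Status code type 0x00 is the generic command status set, and status code 0x08 in that set is Command Aborted due to SQ Deletion, so these lines are the expected flush of every in-flight READ when a queue pair such as 0xc18fb0 is disconnected and freed during the reset. The small standalone sketch below is illustrative only: the bit layout follows the NVMe completion status word, and the sample value is constructed locally rather than read from a controller.

/* Minimal sketch, not part of this test: unpack the (SCT/SC) pair that the
 * completion dumps above print. Bit positions follow the NVMe completion
 * status word: phase bit 0, SC bits 1..8, SCT bits 9..11, More bit 14,
 * DNR bit 15. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status_fields {
    uint8_t sc;   /* status code */
    uint8_t sct;  /* status code type */
    uint8_t more; /* more status information available */
    uint8_t dnr;  /* do not retry */
};

static struct nvme_status_fields decode_status(uint16_t status_word)
{
    struct nvme_status_fields f;

    f.sc   = (uint8_t)((status_word >> 1) & 0xff); /* bits 1..8  */
    f.sct  = (uint8_t)((status_word >> 9) & 0x7);  /* bits 9..11 */
    f.more = (uint8_t)((status_word >> 14) & 0x1); /* bit 14     */
    f.dnr  = (uint8_t)((status_word >> 15) & 0x1); /* bit 15     */
    return f;
}

int main(void)
{
    /* Construct SCT 0x0 (generic set), SC 0x08 (command aborted - SQ deletion), m:0 dnr:0 */
    uint16_t example = (uint16_t)((0x0u << 9) | (0x08u << 1));
    struct nvme_status_fields f = decode_status(example);

    /* Prints: ABORTED - SQ DELETION (00/08) m:0 dnr:0, matching the log format */
    printf("ABORTED - SQ DELETION (%02x/%02x) m:%u dnr:%u\n", f.sct, f.sc, f.more, f.dnr);
    return 0;
}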
00:24:24.229 [2024-07-16 00:50:41.889299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:24.229 [2024-07-16 00:50:41.889318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:24.229 [2024-07-16 00:50:41.889329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:24.229 [2024-07-16 00:50:41.889579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-07-16 00:50:41.889597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8d9c0 with addr=10.0.0.2, port=4420 00:24:24.229 [2024-07-16 00:50:41.889608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d9c0 is same with the state(5) to be set 00:24:24.229 [2024-07-16 00:50:41.889726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-07-16 00:50:41.889741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb79100 with addr=10.0.0.2, port=4420 00:24:24.229 [2024-07-16 00:50:41.889752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb79100 is same with the state(5) to be set 00:24:24.229 [2024-07-16 00:50:41.889871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-07-16 00:50:41.889886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd36040 with addr=10.0.0.2, port=4420 00:24:24.229 [2024-07-16 00:50:41.889897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd36040 is same with the state(5) to be set 00:24:24.229 [2024-07-16 00:50:41.889928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8d9c0 (9): Bad file descriptor 00:24:24.229 [2024-07-16 00:50:41.889942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb79100 (9): Bad file descriptor 00:24:24.229 [2024-07-16 00:50:41.889956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd36040 (9): Bad file descriptor 00:24:24.229 [2024-07-16 00:50:41.889985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:24.229 [2024-07-16 00:50:41.889997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:24.229 [2024-07-16 00:50:41.890007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:24.229 [2024-07-16 00:50:41.890020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:24.229 [2024-07-16 00:50:41.890031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:24.229 [2024-07-16 00:50:41.890041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:24:24.229 [2024-07-16 00:50:41.890054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:24.229 [2024-07-16 00:50:41.890064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:24.229 [2024-07-16 00:50:41.890073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:24.229 [2024-07-16 00:50:41.890104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.229 [2024-07-16 00:50:41.890114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.229 [2024-07-16 00:50:41.890123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.229 [2024-07-16 00:50:41.891437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbaf590 (9): Bad file descriptor 00:24:24.229 [2024-07-16 00:50:41.891470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66b610 (9): Bad file descriptor 00:24:24.229 [2024-07-16 00:50:41.891563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:24.229 [2024-07-16 00:50:41.891614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:24.229 [2024-07-16 00:50:41.891628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.229 [2024-07-16 00:50:41.891640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:24.229 [2024-07-16 00:50:41.891837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-07-16 00:50:41.891855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbaf970 with addr=10.0.0.2, port=4420 00:24:24.229 [2024-07-16 00:50:41.891865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaf970 is same with the state(5) to be set 00:24:24.229 [2024-07-16 00:50:41.892120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-07-16 00:50:41.892135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2ae60 with addr=10.0.0.2, port=4420 00:24:24.229 [2024-07-16 00:50:41.892146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ae60 is same with the state(5) to be set 00:24:24.229 [2024-07-16 00:50:41.892260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-07-16 00:50:41.892276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb6a370 with addr=10.0.0.2, port=4420 00:24:24.229 [2024-07-16 00:50:41.892287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a370 is same with the state(5) to be set 00:24:24.229 [2024-07-16 00:50:41.892502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.229 [2024-07-16 00:50:41.892516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3b020 with addr=10.0.0.2, port=4420 00:24:24.229 [2024-07-16 00:50:41.892527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3b020 is same with the state(5) to be set 00:24:24.229 [2024-07-16 
00:50:41.892539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbaf970 (9): Bad file descriptor 00:24:24.229 [2024-07-16 00:50:41.892571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2ae60 (9): Bad file descriptor 00:24:24.229 [2024-07-16 00:50:41.892585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6a370 (9): Bad file descriptor 00:24:24.229 [2024-07-16 00:50:41.892597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3b020 (9): Bad file descriptor 00:24:24.229 [2024-07-16 00:50:41.892608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:24.229 [2024-07-16 00:50:41.892617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:24.229 [2024-07-16 00:50:41.892627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:24.229 [2024-07-16 00:50:41.892658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.229 [2024-07-16 00:50:41.892668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:24.229 [2024-07-16 00:50:41.892677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:24.229 [2024-07-16 00:50:41.892687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:24.229 [2024-07-16 00:50:41.892700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.229 [2024-07-16 00:50:41.892714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.229 [2024-07-16 00:50:41.892723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.229 [2024-07-16 00:50:41.892736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:24.229 [2024-07-16 00:50:41.892745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:24.229 [2024-07-16 00:50:41.892754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:24.229 [2024-07-16 00:50:41.892785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.229 [2024-07-16 00:50:41.892795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.229 [2024-07-16 00:50:41.892804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.229 [2024-07-16 00:50:41.896480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:24.229 [2024-07-16 00:50:41.896730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-07-16 00:50:41.896748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd28cb0 with addr=10.0.0.2, port=4420 00:24:24.230 [2024-07-16 00:50:41.896759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd28cb0 is same with the state(5) to be set 00:24:24.230 [2024-07-16 00:50:41.896790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd28cb0 (9): Bad file descriptor 00:24:24.230 [2024-07-16 00:50:41.896820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:24.230 [2024-07-16 00:50:41.896831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:24.230 [2024-07-16 00:50:41.896841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:24.230 [2024-07-16 00:50:41.896882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.230 [2024-07-16 00:50:41.899447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:24.230 [2024-07-16 00:50:41.899471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:24.230 [2024-07-16 00:50:41.899517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:24.230 [2024-07-16 00:50:41.899788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-07-16 00:50:41.899806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd36040 with addr=10.0.0.2, port=4420 00:24:24.230 [2024-07-16 00:50:41.899817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd36040 is same with the state(5) to be set 00:24:24.230 [2024-07-16 00:50:41.899968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-07-16 00:50:41.899983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb79100 with addr=10.0.0.2, port=4420 00:24:24.230 [2024-07-16 00:50:41.899994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb79100 is same with the state(5) to be set 00:24:24.230 [2024-07-16 00:50:41.900084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.230 [2024-07-16 00:50:41.900099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8d9c0 with addr=10.0.0.2, port=4420 00:24:24.230 [2024-07-16 00:50:41.900110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d9c0 is same with the state(5) to be set 00:24:24.230 [2024-07-16 00:50:41.900123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd36040 (9): Bad file descriptor 00:24:24.230 [2024-07-16 00:50:41.900136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb79100 (9): Bad file descriptor 00:24:24.230 [2024-07-16 00:50:41.900173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xb8d9c0 (9): Bad file descriptor 00:24:24.230 [2024-07-16 00:50:41.900186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:24.230 [2024-07-16 00:50:41.900195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:24.230 [2024-07-16 00:50:41.900205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:24.230 [2024-07-16 00:50:41.900218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:24.230 [2024-07-16 00:50:41.900228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:24.230 [2024-07-16 00:50:41.900238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:24.230 [2024-07-16 00:50:41.900276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.230 [2024-07-16 00:50:41.900288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.230 [2024-07-16 00:50:41.900297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:24.230 [2024-07-16 00:50:41.900306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:24.230 [2024-07-16 00:50:41.900316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:24.230 [2024-07-16 00:50:41.900347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:24.230 [2024-07-16 00:50:41.901552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 
00:50:41.901815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.901979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.901990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.902002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.902013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.902026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.902037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.230 [2024-07-16 00:50:41.902052] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.230 [2024-07-16 00:50:41.902062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-07-16 00:50:41.902075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-07-16 00:50:41.902086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-07-16 00:50:41.902100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-07-16 00:50:41.902110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-07-16 00:50:41.902123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-07-16 00:50:41.902133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-07-16 00:50:41.902145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-07-16 00:50:41.902155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-07-16 00:50:41.902168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-07-16 00:50:41.902178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-07-16 00:50:41.902190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-07-16 00:50:41.902200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-07-16 00:50:41.902213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-07-16 00:50:41.902223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-07-16 00:50:41.902235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-07-16 00:50:41.902246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-07-16 00:50:41.902265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.231 [2024-07-16 00:50:41.902275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.231 [2024-07-16 00:50:41.902288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:24.231 [2024-07-16 00:50:41.902299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-07-16 00:50:41.902311 through 00:50:41.903060: 33 further READ command/completion pairs elided; sqid:1 cid:31-63 nsid:1 lba:20352-24448 (step 128) len:128, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:24.232 [2024-07-16 00:50:41.903071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc17aa0 is same with the state(5) to be set
[... 2024-07-16 00:50:41.904545 through 00:50:41.906050: 64 READ command/completion pairs elided; sqid:1 cid:0-63 nsid:1 lba:16384-24448 (step 128) len:128, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:24.233 [2024-07-16 00:50:41.906061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a2a0 is same with the state(5) to be set
00:24:24.233 [2024-07-16 00:50:41.908154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:24:24.233 task offset: 19072 on job bdev=Nvme10n1 fails
00:24:24.233
00:24:24.233 Latency(us)
00:24:24.233 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; each job ended in error after the runtime shown below)
00:24:24.233 Device Information  : runtime(s)    IOPS   MiB/s   Fail/s   TO/s     Average        min         max
00:24:24.233 Nvme1n1             :       1.05  121.53    7.60    60.77   0.00   346732.61    30980.65   329824.81
00:24:24.233 Nvme2n1             :       1.07  120.05    7.50    60.03   0.00   343314.93    53620.36   291694.78
00:24:24.233 Nvme3n1             :       1.07  134.72    8.42    44.91   0.00   333748.29    52428.80   327918.31
00:24:24.233 Nvme4n1             :       1.07  179.15   11.20    59.72   0.00   246888.73    23235.49   280255.77
00:24:24.233 Nvme5n1             :       1.05  121.33    7.58    60.66   0.00   315938.75    15609.48   345076.83
00:24:24.233 Nvme6n1             :       1.06  120.43    7.53    60.21   0.00   310701.92    11617.75   320292.31
00:24:24.233 Nvme7n1             :       1.10  116.62    7.29    58.31   0.00   314341.31    55765.18   310759.80
00:24:24.233 Nvme8n1             :       1.08  177.89   11.12    59.30   0.00   225188.01     6106.76   312666.30
00:24:24.233 Nvme9n1             :       1.10  116.31    7.27    58.15   0.00   299546.07    40274.85   293601.28
00:24:24.233 Nvme10n1            :       1.03  123.81    7.74    61.91   0.00   269549.30     5779.08   322198.81
00:24:24.233 ===================================================================================================================
00:24:24.233 Total               :            1331.85   83.24   583.96   0.00   296560.20     5779.08   345076.83
00:24:24.233 [2024-07-16 00:50:41.936722] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:24.233 [2024-07-16 00:50:41.936759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:24.233 [2024-07-16 00:50:41.937190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.233 [2024-07-16 00:50:41.937216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x66b610 with addr=10.0.0.2, port=4420 [2024-07-16 00:50:41.937229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66b610 is same with the state(5) to
be set 00:24:24.233 [2024-07-16 00:50:41.937425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.233 [2024-07-16 00:50:41.937441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbaf590 with addr=10.0.0.2, port=4420 00:24:24.233 [2024-07-16 00:50:41.937451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaf590 is same with the state(5) to be set 00:24:24.233 [2024-07-16 00:50:41.937505] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:24.233 [2024-07-16 00:50:41.937521] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:24.233 [2024-07-16 00:50:41.937537] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:24.233 [2024-07-16 00:50:41.937551] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:24.234 [2024-07-16 00:50:41.937565] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:24.234 [2024-07-16 00:50:41.938230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:24.234 [2024-07-16 00:50:41.938248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:24.234 [2024-07-16 00:50:41.938267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:24.234 [2024-07-16 00:50:41.938280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:24.234 [2024-07-16 00:50:41.938291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:24.234 [2024-07-16 00:50:41.938365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66b610 (9): Bad file descriptor 00:24:24.234 [2024-07-16 00:50:41.938383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbaf590 (9): Bad file descriptor 00:24:24.234 [2024-07-16 00:50:41.938741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:24.234 [2024-07-16 00:50:41.938760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:24.234 [2024-07-16 00:50:41.938976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.234 [2024-07-16 00:50:41.938994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbaf970 with addr=10.0.0.2, port=4420 00:24:24.234 [2024-07-16 00:50:41.939006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaf970 is same with the state(5) to be set 00:24:24.234 [2024-07-16 00:50:41.939217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.234 [2024-07-16 00:50:41.939233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3b020 with addr=10.0.0.2, port=4420 00:24:24.234 [2024-07-16 00:50:41.939244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3b020 is same with the state(5) to be set 00:24:24.234 [2024-07-16 00:50:41.939488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.234 
[2024-07-16 00:50:41.939504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb6a370 with addr=10.0.0.2, port=4420 00:24:24.234 [2024-07-16 00:50:41.939515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a370 is same with the state(5) to be set 00:24:24.234 [2024-07-16 00:50:41.939650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.234 [2024-07-16 00:50:41.939664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd2ae60 with addr=10.0.0.2, port=4420 00:24:24.234 [2024-07-16 00:50:41.939675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ae60 is same with the state(5) to be set 00:24:24.234 [2024-07-16 00:50:41.939799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.234 [2024-07-16 00:50:41.939813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd28cb0 with addr=10.0.0.2, port=4420 00:24:24.234 [2024-07-16 00:50:41.939824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd28cb0 is same with the state(5) to be set 00:24:24.234 [2024-07-16 00:50:41.939834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:24.234 [2024-07-16 00:50:41.939843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:24.234 [2024-07-16 00:50:41.939854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:24.234 [2024-07-16 00:50:41.939869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:24.234 [2024-07-16 00:50:41.939878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:24.234 [2024-07-16 00:50:41.939887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:24.234 [2024-07-16 00:50:41.939938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:24.234 [2024-07-16 00:50:41.939966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.234 [2024-07-16 00:50:41.939977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
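errno = 111 in the posix_sock_create errors above is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 any more, which is consistent with the target application having been stopped while the host paths were still trying to reconnect. A quick manual probe from the initiator side, plain bash and not part of the test scripts (a hedged sketch, not the suite's own check), would be:
# probe the NVMe/TCP listener that the reconnect attempts are targeting
timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
  && echo 'listener on 10.0.0.2:4420 is accepting connections' \
  || echo 'connect failed (refused or timed out), matching errno = 111 above'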
00:24:24.234 [2024-07-16 00:50:41.940104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.234 [2024-07-16 00:50:41.940119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb79100 with addr=10.0.0.2, port=4420 00:24:24.234 [2024-07-16 00:50:41.940129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb79100 is same with the state(5) to be set 00:24:24.234 [2024-07-16 00:50:41.940251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.234 [2024-07-16 00:50:41.940272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd36040 with addr=10.0.0.2, port=4420 00:24:24.234 [2024-07-16 00:50:41.940282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd36040 is same with the state(5) to be set 00:24:24.234 [2024-07-16 00:50:41.940295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbaf970 (9): Bad file descriptor 00:24:24.234 [2024-07-16 00:50:41.940310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3b020 (9): Bad file descriptor 00:24:24.234 [2024-07-16 00:50:41.940322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6a370 (9): Bad file descriptor 00:24:24.234 [2024-07-16 00:50:41.940334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2ae60 (9): Bad file descriptor 00:24:24.234 [2024-07-16 00:50:41.940347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd28cb0 (9): Bad file descriptor 00:24:24.234 [2024-07-16 00:50:41.940606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.234 [2024-07-16 00:50:41.940621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8d9c0 with addr=10.0.0.2, port=4420 00:24:24.234 [2024-07-16 00:50:41.940632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d9c0 is same with the state(5) to be set 00:24:24.234 [2024-07-16 00:50:41.940644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb79100 (9): Bad file descriptor 00:24:24.234 [2024-07-16 00:50:41.940656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd36040 (9): Bad file descriptor 00:24:24.234 [2024-07-16 00:50:41.940667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:24.234 [2024-07-16 00:50:41.940676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:24.234 [2024-07-16 00:50:41.940685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:24.234 [2024-07-16 00:50:41.940698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:24.234 [2024-07-16 00:50:41.940707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:24.234 [2024-07-16 00:50:41.940717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:24:24.234 [2024-07-16 00:50:41.940729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:24.234 [2024-07-16 00:50:41.940738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:24.234 [2024-07-16 00:50:41.940748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:24.234 [2024-07-16 00:50:41.940760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:24.234 [2024-07-16 00:50:41.940769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:24.234 [2024-07-16 00:50:41.940782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:24.234 [2024-07-16 00:50:41.940795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:24.234 [2024-07-16 00:50:41.940804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:24.234 [2024-07-16 00:50:41.940814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:24.234 [2024-07-16 00:50:41.940848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.234 [2024-07-16 00:50:41.940859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.234 [2024-07-16 00:50:41.940867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.234 [2024-07-16 00:50:41.940876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.234 [2024-07-16 00:50:41.940884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.234 [2024-07-16 00:50:41.940894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8d9c0 (9): Bad file descriptor 00:24:24.234 [2024-07-16 00:50:41.940905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:24.234 [2024-07-16 00:50:41.940914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:24.234 [2024-07-16 00:50:41.940923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:24.234 [2024-07-16 00:50:41.940934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:24.234 [2024-07-16 00:50:41.940944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:24.234 [2024-07-16 00:50:41.940952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:24.234 [2024-07-16 00:50:41.940986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.234 [2024-07-16 00:50:41.940997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
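These reset/reinitialization failures repeat for each of the ten subsystems (nqn.2016-06.io.spdk:cnode1 through cnode10). A minimal triage sketch for a saved copy of this console output; console.log is a hypothetical file name and the commands below are not part of the test suite:
# count how many I/O were aborted by the submission-queue deletions during shutdown
grep -c 'ABORTED - SQ DELETION' console.log
# list which subsystems failed controller reinitialization, with a count per subsystem
grep -o '\[nqn\.2016-06\.io\.spdk:cnode[0-9]*\] controller reinitialization failed' console.log | sort | uniq -c
# count the final reset failures reported by bdev_nvme
grep -c 'Resetting controller failed\.' console.log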
00:24:24.234 [2024-07-16 00:50:41.941006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:24.234 [2024-07-16 00:50:41.941016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:24.234 [2024-07-16 00:50:41.941027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:24.234 [2024-07-16 00:50:41.941078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.801 00:50:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:24.801 00:50:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3121876 00:24:25.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3121876) - No such process 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:25.738 rmmod nvme_tcp 00:24:25.738 rmmod nvme_fabrics 00:24:25.738 rmmod nvme_keyring 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:25.738 00:50:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.738 00:50:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.272 00:50:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:28.272 00:24:28.272 real 0m8.576s 00:24:28.272 user 0m22.259s 00:24:28.272 sys 0m1.606s 00:24:28.272 00:50:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:28.272 00:50:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:28.272 ************************************ 00:24:28.272 END TEST nvmf_shutdown_tc3 00:24:28.272 ************************************ 00:24:28.272 00:50:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:28.272 00:50:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:28.272 00:24:28.272 real 0m33.884s 00:24:28.272 user 1m27.405s 00:24:28.272 sys 0m9.294s 00:24:28.272 00:50:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:28.272 00:50:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:28.272 ************************************ 00:24:28.272 END TEST nvmf_shutdown 00:24:28.272 ************************************ 00:24:28.272 00:50:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:28.272 00:50:45 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:28.272 00:50:45 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:28.272 00:50:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.272 00:50:45 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:28.272 00:50:45 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.272 00:50:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.272 00:50:45 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:28.272 00:50:45 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:28.272 00:50:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:28.272 00:50:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.272 00:50:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.272 ************************************ 00:24:28.272 START TEST nvmf_multicontroller 00:24:28.272 ************************************ 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:28.272 * Looking for test storage... 
00:24:28.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:28.272 00:50:45 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:28.272 00:50:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.273 00:50:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.840 00:50:51 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:34.840 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.840 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:34.841 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:34.841 Found net devices under 0000:af:00.0: cvl_0_0 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:34.841 Found net devices under 0000:af:00.1: cvl_0_1 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.841 00:50:51 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:34.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:24:34.841 00:24:34.841 --- 10.0.0.2 ping statistics --- 00:24:34.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.841 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:24:34.841 00:24:34.841 --- 10.0.0.1 ping statistics --- 00:24:34.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.841 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3126221 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3126221 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3126221 ']' 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.841 00:50:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:34.841 [2024-07-16 00:50:51.774381] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:24:34.841 [2024-07-16 00:50:51.774437] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.841 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.841 [2024-07-16 00:50:51.863184] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:34.841 [2024-07-16 00:50:51.967562] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.841 [2024-07-16 00:50:51.967606] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.841 [2024-07-16 00:50:51.967618] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.841 [2024-07-16 00:50:51.967629] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.841 [2024-07-16 00:50:51.967639] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
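At this point multicontroller.sh has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and is waiting for its JSON-RPC socket before configuring the target. The rpc_cmd calls that follow in the trace can be reproduced by hand; the sketch below is illustrative only, assuming scripts/rpc.py is run from the SPDK tree against the default /var/tmp/spdk.sock and reusing the addresses and NQNs from this run.

  # start the target on the namespaced interface (mirrors nvmfappstart -m 0xE)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

  # once the RPC socket is up, create the TCP transport, a backing bdev and a subsystem
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

  # expose the same subsystem on two ports so the host side can attach two paths
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421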
00:24:34.841 [2024-07-16 00:50:51.967698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.841 [2024-07-16 00:50:51.967808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.841 [2024-07-16 00:50:51.967810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.101 [2024-07-16 00:50:52.754111] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.101 Malloc0 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.101 [2024-07-16 00:50:52.827883] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.101 
00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.101 [2024-07-16 00:50:52.835812] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.101 Malloc1 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.101 00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3126498 00:24:35.102 00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:35.102 00:50:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:35.102 00:50:52 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 3126498 /var/tmp/bdevperf.sock 00:24:35.102 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3126498 ']' 00:24:35.102 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.102 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.102 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.102 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.102 00:50:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.681 NVMe0n1 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.681 1 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.681 request: 00:24:35.681 { 00:24:35.681 "name": "NVMe0", 00:24:35.681 "trtype": "tcp", 00:24:35.681 "traddr": "10.0.0.2", 00:24:35.681 "adrfam": "ipv4", 00:24:35.681 "trsvcid": "4420", 00:24:35.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.681 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:35.681 "hostaddr": "10.0.0.2", 00:24:35.681 "hostsvcid": "60000", 00:24:35.681 "prchk_reftag": false, 00:24:35.681 "prchk_guard": false, 00:24:35.681 "hdgst": false, 00:24:35.681 "ddgst": false, 00:24:35.681 "method": "bdev_nvme_attach_controller", 00:24:35.681 "req_id": 1 00:24:35.681 } 00:24:35.681 Got JSON-RPC error response 00:24:35.681 response: 00:24:35.681 { 00:24:35.681 "code": -114, 00:24:35.681 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:35.681 } 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.681 request: 00:24:35.681 { 00:24:35.681 "name": "NVMe0", 00:24:35.681 "trtype": "tcp", 00:24:35.681 "traddr": "10.0.0.2", 00:24:35.681 "adrfam": "ipv4", 00:24:35.681 "trsvcid": "4420", 00:24:35.681 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:35.681 "hostaddr": "10.0.0.2", 00:24:35.681 "hostsvcid": "60000", 00:24:35.681 "prchk_reftag": false, 00:24:35.681 "prchk_guard": false, 00:24:35.681 
"hdgst": false, 00:24:35.681 "ddgst": false, 00:24:35.681 "method": "bdev_nvme_attach_controller", 00:24:35.681 "req_id": 1 00:24:35.681 } 00:24:35.681 Got JSON-RPC error response 00:24:35.681 response: 00:24:35.681 { 00:24:35.681 "code": -114, 00:24:35.681 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:35.681 } 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.681 request: 00:24:35.681 { 00:24:35.681 "name": "NVMe0", 00:24:35.681 "trtype": "tcp", 00:24:35.681 "traddr": "10.0.0.2", 00:24:35.681 "adrfam": "ipv4", 00:24:35.681 "trsvcid": "4420", 00:24:35.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.681 "hostaddr": "10.0.0.2", 00:24:35.681 "hostsvcid": "60000", 00:24:35.681 "prchk_reftag": false, 00:24:35.681 "prchk_guard": false, 00:24:35.681 "hdgst": false, 00:24:35.681 "ddgst": false, 00:24:35.681 "multipath": "disable", 00:24:35.681 "method": "bdev_nvme_attach_controller", 00:24:35.681 "req_id": 1 00:24:35.681 } 00:24:35.681 Got JSON-RPC error response 00:24:35.681 response: 00:24:35.681 { 00:24:35.681 "code": -114, 00:24:35.681 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:35.681 } 00:24:35.681 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:35.940 00:50:53 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.940 request: 00:24:35.940 { 00:24:35.940 "name": "NVMe0", 00:24:35.940 "trtype": "tcp", 00:24:35.940 "traddr": "10.0.0.2", 00:24:35.940 "adrfam": "ipv4", 00:24:35.940 "trsvcid": "4420", 00:24:35.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.940 "hostaddr": "10.0.0.2", 00:24:35.940 "hostsvcid": "60000", 00:24:35.940 "prchk_reftag": false, 00:24:35.940 "prchk_guard": false, 00:24:35.940 "hdgst": false, 00:24:35.940 "ddgst": false, 00:24:35.940 "multipath": "failover", 00:24:35.940 "method": "bdev_nvme_attach_controller", 00:24:35.940 "req_id": 1 00:24:35.940 } 00:24:35.940 Got JSON-RPC error response 00:24:35.940 response: 00:24:35.940 { 00:24:35.940 "code": -114, 00:24:35.940 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:35.940 } 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.940 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.940 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.198 00:24:36.198 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.198 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.198 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:36.198 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.198 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:36.198 00:50:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.198 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:36.198 00:50:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:37.576 0 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3126498 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3126498 ']' 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3126498 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3126498 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3126498' 00:24:37.576 killing process with pid 3126498 00:24:37.576 00:50:55 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3126498 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3126498 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:24:37.576 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:37.576 [2024-07-16 00:50:52.944769] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:24:37.576 [2024-07-16 00:50:52.944834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126498 ] 00:24:37.576 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.576 [2024-07-16 00:50:53.027224] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.576 [2024-07-16 00:50:53.118747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.576 [2024-07-16 00:50:53.857758] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name a3a8a853-271f-474b-8a18-109db2614240 already exists 00:24:37.576 [2024-07-16 00:50:53.857795] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:a3a8a853-271f-474b-8a18-109db2614240 alias for bdev NVMe1n1 00:24:37.576 [2024-07-16 00:50:53.857807] bdev_nvme.c:4322:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:37.576 Running I/O for 1 seconds... 
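The try.txt contents being replayed at this point come from the bdevperf side of the test: bdevperf is started idle and configured over its own RPC socket, the duplicate-name attach attempts above are expected to fail with -114, and only a second listener port under the same controller name is accepted before the write workload is kicked off. A rough standalone equivalent, assuming the same socket path, addresses and NQN as this run:

  # start bdevperf with no config (-z) and a private RPC socket
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

  # attach the first path, then add the second listener port under the same controller name
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # start the actual I/O pass once the bdevs exist
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests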
00:24:37.576 00:24:37.576 Latency(us) 00:24:37.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.576 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:37.576 NVMe0n1 : 1.01 7791.25 30.43 0.00 0.00 16393.06 5213.09 30146.56 00:24:37.576 =================================================================================================================== 00:24:37.576 Total : 7791.25 30.43 0.00 0.00 16393.06 5213.09 30146.56 00:24:37.576 Received shutdown signal, test time was about 1.000000 seconds 00:24:37.576 00:24:37.576 Latency(us) 00:24:37.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.576 =================================================================================================================== 00:24:37.576 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.576 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:37.576 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:37.576 rmmod nvme_tcp 00:24:37.576 rmmod nvme_fabrics 00:24:37.576 rmmod nvme_keyring 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3126221 ']' 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3126221 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3126221 ']' 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3126221 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3126221 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3126221' 00:24:37.836 killing process with pid 3126221 00:24:37.836 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3126221 00:24:37.836 00:50:55 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3126221 00:24:38.095 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:38.095 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:38.095 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:38.095 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.095 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:38.095 00:50:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.095 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.095 00:50:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.627 00:50:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:40.627 00:24:40.627 real 0m12.196s 00:24:40.627 user 0m15.735s 00:24:40.627 sys 0m5.301s 00:24:40.627 00:50:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:40.627 00:50:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.627 ************************************ 00:24:40.627 END TEST nvmf_multicontroller 00:24:40.627 ************************************ 00:24:40.627 00:50:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:40.627 00:50:57 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:40.627 00:50:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:40.627 00:50:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:40.627 00:50:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:40.627 ************************************ 00:24:40.627 START TEST nvmf_aer 00:24:40.627 ************************************ 00:24:40.627 00:50:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:40.627 * Looking for test storage... 
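Before nvmf_aer begins its own setup, the multicontroller run above finished with nvmftestfini. Roughly, that teardown amounts to the commands below; the ip netns delete step is an assumption about what _remove_spdk_ns does rather than a command taken from this trace.

  # unload the host-side NVMe/TCP modules and stop the target
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill $nvmfpid                      # 3126221 in this run

  # drop the test namespace and any leftover addresses
  ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1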
00:24:40.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.627 00:50:58 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:40.628 00:50:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:45.898 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 
0x159b)' 00:24:45.898 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.898 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:45.899 Found net devices under 0000:af:00.0: cvl_0_0 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:45.899 Found net devices under 0000:af:00.1: cvl_0_1 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.899 
00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.899 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:46.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:24:46.158 00:24:46.158 --- 10.0.0.2 ping statistics --- 00:24:46.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.158 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:46.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:24:46.158 00:24:46.158 --- 10.0.0.1 ping statistics --- 00:24:46.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.158 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3130629 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3130629 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3130629 ']' 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.158 00:51:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:46.158 [2024-07-16 00:51:03.973913] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:24:46.158 [2024-07-16 00:51:03.973975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.418 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.418 [2024-07-16 00:51:04.066283] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:46.418 [2024-07-16 00:51:04.159799] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.418 [2024-07-16 00:51:04.159845] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
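Editor's note: for anyone replaying this environment by hand outside the autotest harness, the nvmf_tcp_init steps traced above reduce to roughly the sketch below. The interface names (cvl_0_0, cvl_0_1), the namespace name and the 10.0.0.0/24 addresses are simply the values observed in this run; the harness derives them from the detected E810 ports rather than hard-coding them.

    # Minimal sketch of the target/initiator split performed above (run as root).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # first port becomes the target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns
    modprobe nvme-tcp                                              # host-side transport module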
00:24:46.418 [2024-07-16 00:51:04.159856] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.418 [2024-07-16 00:51:04.159865] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.418 [2024-07-16 00:51:04.159872] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.418 [2024-07-16 00:51:04.159932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.418 [2024-07-16 00:51:04.160046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.418 [2024-07-16 00:51:04.160159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:46.418 [2024-07-16 00:51:04.160159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.357 00:51:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.357 00:51:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:24:47.357 00:51:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:47.357 00:51:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:47.357 00:51:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.357 00:51:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.357 00:51:04 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:47.357 00:51:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.357 00:51:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.357 [2024-07-16 00:51:04.971168] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.357 00:51:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.357 00:51:04 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:47.357 00:51:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.357 00:51:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.357 Malloc0 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.357 [2024-07-16 00:51:05.031013] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.357 [ 00:24:47.357 { 00:24:47.357 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:47.357 "subtype": "Discovery", 00:24:47.357 "listen_addresses": [], 00:24:47.357 "allow_any_host": true, 00:24:47.357 "hosts": [] 00:24:47.357 }, 00:24:47.357 { 00:24:47.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.357 "subtype": "NVMe", 00:24:47.357 "listen_addresses": [ 00:24:47.357 { 00:24:47.357 "trtype": "TCP", 00:24:47.357 "adrfam": "IPv4", 00:24:47.357 "traddr": "10.0.0.2", 00:24:47.357 "trsvcid": "4420" 00:24:47.357 } 00:24:47.357 ], 00:24:47.357 "allow_any_host": true, 00:24:47.357 "hosts": [], 00:24:47.357 "serial_number": "SPDK00000000000001", 00:24:47.357 "model_number": "SPDK bdev Controller", 00:24:47.357 "max_namespaces": 2, 00:24:47.357 "min_cntlid": 1, 00:24:47.357 "max_cntlid": 65519, 00:24:47.357 "namespaces": [ 00:24:47.357 { 00:24:47.357 "nsid": 1, 00:24:47.357 "bdev_name": "Malloc0", 00:24:47.357 "name": "Malloc0", 00:24:47.357 "nguid": "D188A04FE3F84B0280E159FBCBAC46E0", 00:24:47.357 "uuid": "d188a04f-e3f8-4b02-80e1-59fbcbac46e0" 00:24:47.357 } 00:24:47.357 ] 00:24:47.357 } 00:24:47.357 ] 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3130854 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:47.357 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:47.357 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.673 Malloc1 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.673 [ 00:24:47.673 { 00:24:47.673 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:47.673 "subtype": "Discovery", 00:24:47.673 "listen_addresses": [], 00:24:47.673 "allow_any_host": true, 00:24:47.673 "hosts": [] 00:24:47.673 }, 00:24:47.673 { 00:24:47.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.673 "subtype": "NVMe", 00:24:47.673 "listen_addresses": [ 00:24:47.673 { 00:24:47.673 "trtype": "TCP", 00:24:47.673 "adrfam": "IPv4", 00:24:47.673 "traddr": "10.0.0.2", 00:24:47.673 "trsvcid": "4420" 00:24:47.673 } 00:24:47.673 ], 00:24:47.673 "allow_any_host": true, 00:24:47.673 "hosts": [], 00:24:47.673 "serial_number": "SPDK00000000000001", 00:24:47.673 "model_number": "SPDK bdev Controller", 00:24:47.673 "max_namespaces": 2, 00:24:47.673 "min_cntlid": 1, 00:24:47.673 "max_cntlid": 65519, 00:24:47.673 "namespaces": [ 00:24:47.673 { 00:24:47.673 "nsid": 1, 00:24:47.673 "bdev_name": "Malloc0", 00:24:47.673 "name": "Malloc0", 00:24:47.673 "nguid": "D188A04FE3F84B0280E159FBCBAC46E0", 00:24:47.673 "uuid": "d188a04f-e3f8-4b02-80e1-59fbcbac46e0" 00:24:47.673 }, 00:24:47.673 Asynchronous Event Request test 00:24:47.673 Attaching to 10.0.0.2 00:24:47.673 Attached to 10.0.0.2 00:24:47.673 Registering asynchronous event callbacks... 00:24:47.673 Starting namespace attribute notice tests for all controllers... 00:24:47.673 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:47.673 aer_cb - Changed Namespace 00:24:47.673 Cleaning up... 
00:24:47.673 { 00:24:47.673 "nsid": 2, 00:24:47.673 "bdev_name": "Malloc1", 00:24:47.673 "name": "Malloc1", 00:24:47.673 "nguid": "E27911507EB84E7F9B206299A259AF06", 00:24:47.673 "uuid": "e2791150-7eb8-4e7f-9b20-6299a259af06" 00:24:47.673 } 00:24:47.673 ] 00:24:47.673 } 00:24:47.673 ] 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3130854 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:47.673 rmmod nvme_tcp 00:24:47.673 rmmod nvme_fabrics 00:24:47.673 rmmod nvme_keyring 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3130629 ']' 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3130629 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3130629 ']' 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3130629 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.673 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3130629 00:24:47.965 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:47.965 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:47.965 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3130629' 
00:24:47.965 killing process with pid 3130629 00:24:47.965 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3130629 00:24:47.965 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3130629 00:24:47.965 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:47.965 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:47.965 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:47.965 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:47.965 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:47.965 00:51:05 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.965 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.965 00:51:05 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.521 00:51:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:50.521 00:24:50.521 real 0m9.817s 00:24:50.521 user 0m7.899s 00:24:50.521 sys 0m4.907s 00:24:50.521 00:51:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:50.521 00:51:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:50.521 ************************************ 00:24:50.521 END TEST nvmf_aer 00:24:50.521 ************************************ 00:24:50.521 00:51:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:50.521 00:51:07 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:50.521 00:51:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:50.521 00:51:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.521 00:51:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:50.521 ************************************ 00:24:50.521 START TEST nvmf_async_init 00:24:50.521 ************************************ 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:50.521 * Looking for test storage... 
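Editor's note: the nvmf_aer test that finished just above boils down to the RPC flow sketched here. rpc_cmd is the harness wrapper around scripts/rpc.py talking to the target started inside cvl_0_0_ns_spdk; all NQNs, sizes and paths are the ones from this run, and the trailing '&' reflects that the aer tool runs in the background while a second namespace is added to trigger the event.

    # Condensed replay of host/aer.sh as traced above (run from the spdk tree).
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 --name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Start the AER tool, then add a second namespace; the resulting namespace-attribute
    # notice is what the "aer_cb - Changed Namespace" line above reports.
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2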
00:24:50.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.521 00:51:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a4e618723d4e4b81a3962bfcf809f982 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:50.522 00:51:07 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:50.522 00:51:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:55.800 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.800 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:55.801 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:55.801 Found net devices under 0000:af:00.0: cvl_0_0 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
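Editor's note: the 'Found 0000:af:00.x (0x8086 - 0x159b)' and 'Found net devices under ...' messages in this block come from the harness walking sysfs for supported NICs. A minimal illustration of that lookup, using the first port seen in this run (the harness actually consults a pre-built pci_bus_cache rather than reading these files inline):

    pci=0000:af:00.0
    cat /sys/bus/pci/devices/$pci/vendor     # 0x8086 (Intel)
    cat /sys/bus/pci/devices/$pci/device     # 0x159b (E810)
    ls  /sys/bus/pci/devices/$pci/net        # -> cvl_0_0, the netdev bound to this port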
00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:55.801 Found net devices under 0000:af:00.1: cvl_0_1 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.801 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:56.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:24:56.060 00:24:56.060 --- 10.0.0.2 ping statistics --- 00:24:56.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.060 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:56.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:24:56.060 00:24:56.060 --- 10.0.0.1 ping statistics --- 00:24:56.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.060 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3134789 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3134789 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3134789 ']' 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.060 00:51:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:56.318 [2024-07-16 00:51:13.928982] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:24:56.318 [2024-07-16 00:51:13.929043] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.318 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.318 [2024-07-16 00:51:14.017739] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.318 [2024-07-16 00:51:14.103230] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.318 [2024-07-16 00:51:14.103281] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.318 [2024-07-16 00:51:14.103292] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.318 [2024-07-16 00:51:14.103300] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.318 [2024-07-16 00:51:14.103307] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.318 [2024-07-16 00:51:14.103330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.254 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.254 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:24:57.254 00:51:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:57.254 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:57.254 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.254 00:51:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.254 00:51:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.255 [2024-07-16 00:51:14.908792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.255 null0 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.255 00:51:14 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a4e618723d4e4b81a3962bfcf809f982 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.255 [2024-07-16 00:51:14.949009] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.255 00:51:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.514 nvme0n1 00:24:57.514 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.514 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:57.514 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.514 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.514 [ 00:24:57.514 { 00:24:57.514 "name": "nvme0n1", 00:24:57.514 "aliases": [ 00:24:57.514 "a4e61872-3d4e-4b81-a396-2bfcf809f982" 00:24:57.514 ], 00:24:57.514 "product_name": "NVMe disk", 00:24:57.514 "block_size": 512, 00:24:57.514 "num_blocks": 2097152, 00:24:57.514 "uuid": "a4e61872-3d4e-4b81-a396-2bfcf809f982", 00:24:57.514 "assigned_rate_limits": { 00:24:57.514 "rw_ios_per_sec": 0, 00:24:57.514 "rw_mbytes_per_sec": 0, 00:24:57.514 "r_mbytes_per_sec": 0, 00:24:57.514 "w_mbytes_per_sec": 0 00:24:57.514 }, 00:24:57.514 "claimed": false, 00:24:57.514 "zoned": false, 00:24:57.514 "supported_io_types": { 00:24:57.514 "read": true, 00:24:57.514 "write": true, 00:24:57.514 "unmap": false, 00:24:57.514 "flush": true, 00:24:57.514 "reset": true, 00:24:57.514 "nvme_admin": true, 00:24:57.514 "nvme_io": true, 00:24:57.514 "nvme_io_md": false, 00:24:57.514 "write_zeroes": true, 00:24:57.514 "zcopy": false, 00:24:57.514 "get_zone_info": false, 00:24:57.514 "zone_management": false, 00:24:57.514 "zone_append": false, 00:24:57.514 "compare": true, 00:24:57.514 "compare_and_write": true, 00:24:57.514 "abort": true, 00:24:57.514 "seek_hole": false, 00:24:57.514 "seek_data": false, 00:24:57.514 "copy": true, 00:24:57.514 "nvme_iov_md": false 00:24:57.514 }, 00:24:57.514 "memory_domains": [ 00:24:57.514 { 00:24:57.514 "dma_device_id": "system", 00:24:57.514 "dma_device_type": 1 00:24:57.514 } 00:24:57.514 ], 00:24:57.514 "driver_specific": { 00:24:57.514 "nvme": [ 00:24:57.514 { 00:24:57.514 "trid": { 00:24:57.514 "trtype": "TCP", 00:24:57.514 "adrfam": "IPv4", 00:24:57.514 "traddr": "10.0.0.2", 
00:24:57.514 "trsvcid": "4420", 00:24:57.514 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:57.514 }, 00:24:57.514 "ctrlr_data": { 00:24:57.514 "cntlid": 1, 00:24:57.514 "vendor_id": "0x8086", 00:24:57.514 "model_number": "SPDK bdev Controller", 00:24:57.514 "serial_number": "00000000000000000000", 00:24:57.514 "firmware_revision": "24.09", 00:24:57.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:57.514 "oacs": { 00:24:57.514 "security": 0, 00:24:57.514 "format": 0, 00:24:57.514 "firmware": 0, 00:24:57.514 "ns_manage": 0 00:24:57.514 }, 00:24:57.514 "multi_ctrlr": true, 00:24:57.514 "ana_reporting": false 00:24:57.514 }, 00:24:57.515 "vs": { 00:24:57.515 "nvme_version": "1.3" 00:24:57.515 }, 00:24:57.515 "ns_data": { 00:24:57.515 "id": 1, 00:24:57.515 "can_share": true 00:24:57.515 } 00:24:57.515 } 00:24:57.515 ], 00:24:57.515 "mp_policy": "active_passive" 00:24:57.515 } 00:24:57.515 } 00:24:57.515 ] 00:24:57.515 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.515 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:57.515 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.515 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.515 [2024-07-16 00:51:15.206275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:57.515 [2024-07-16 00:51:15.206348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a3b40 (9): Bad file descriptor 00:24:57.515 [2024-07-16 00:51:15.338372] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:57.515 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.515 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:57.515 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.515 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.515 [ 00:24:57.515 { 00:24:57.515 "name": "nvme0n1", 00:24:57.515 "aliases": [ 00:24:57.515 "a4e61872-3d4e-4b81-a396-2bfcf809f982" 00:24:57.515 ], 00:24:57.515 "product_name": "NVMe disk", 00:24:57.515 "block_size": 512, 00:24:57.515 "num_blocks": 2097152, 00:24:57.515 "uuid": "a4e61872-3d4e-4b81-a396-2bfcf809f982", 00:24:57.515 "assigned_rate_limits": { 00:24:57.515 "rw_ios_per_sec": 0, 00:24:57.515 "rw_mbytes_per_sec": 0, 00:24:57.515 "r_mbytes_per_sec": 0, 00:24:57.515 "w_mbytes_per_sec": 0 00:24:57.515 }, 00:24:57.515 "claimed": false, 00:24:57.515 "zoned": false, 00:24:57.515 "supported_io_types": { 00:24:57.515 "read": true, 00:24:57.515 "write": true, 00:24:57.515 "unmap": false, 00:24:57.515 "flush": true, 00:24:57.515 "reset": true, 00:24:57.515 "nvme_admin": true, 00:24:57.515 "nvme_io": true, 00:24:57.515 "nvme_io_md": false, 00:24:57.515 "write_zeroes": true, 00:24:57.515 "zcopy": false, 00:24:57.515 "get_zone_info": false, 00:24:57.515 "zone_management": false, 00:24:57.515 "zone_append": false, 00:24:57.515 "compare": true, 00:24:57.515 "compare_and_write": true, 00:24:57.515 "abort": true, 00:24:57.515 "seek_hole": false, 00:24:57.515 "seek_data": false, 00:24:57.515 "copy": true, 00:24:57.515 "nvme_iov_md": false 00:24:57.515 }, 00:24:57.515 "memory_domains": [ 00:24:57.515 { 00:24:57.515 "dma_device_id": "system", 00:24:57.515 "dma_device_type": 
1 00:24:57.515 } 00:24:57.515 ], 00:24:57.515 "driver_specific": { 00:24:57.515 "nvme": [ 00:24:57.515 { 00:24:57.515 "trid": { 00:24:57.515 "trtype": "TCP", 00:24:57.775 "adrfam": "IPv4", 00:24:57.775 "traddr": "10.0.0.2", 00:24:57.775 "trsvcid": "4420", 00:24:57.775 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:57.775 }, 00:24:57.775 "ctrlr_data": { 00:24:57.775 "cntlid": 2, 00:24:57.775 "vendor_id": "0x8086", 00:24:57.775 "model_number": "SPDK bdev Controller", 00:24:57.775 "serial_number": "00000000000000000000", 00:24:57.775 "firmware_revision": "24.09", 00:24:57.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:57.775 "oacs": { 00:24:57.775 "security": 0, 00:24:57.775 "format": 0, 00:24:57.775 "firmware": 0, 00:24:57.775 "ns_manage": 0 00:24:57.775 }, 00:24:57.775 "multi_ctrlr": true, 00:24:57.775 "ana_reporting": false 00:24:57.775 }, 00:24:57.775 "vs": { 00:24:57.775 "nvme_version": "1.3" 00:24:57.775 }, 00:24:57.775 "ns_data": { 00:24:57.775 "id": 1, 00:24:57.775 "can_share": true 00:24:57.775 } 00:24:57.775 } 00:24:57.775 ], 00:24:57.775 "mp_policy": "active_passive" 00:24:57.775 } 00:24:57.775 } 00:24:57.775 ] 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.YYqOwO1Gob 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.YYqOwO1Gob 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.775 [2024-07-16 00:51:15.398942] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:57.775 [2024-07-16 00:51:15.399058] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YYqOwO1Gob 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
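Editor's note: the secure-channel portion of async_init.sh, traced immediately above and continuing below, can be summarized as follows. The PSK value is the sample key echoed above; writing it into the temp file is done by a redirect in the script that does not show up in the xtrace, and the 10.0.0.2/4421 listener and host NQN are the values from this run.

    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    # The deprecation warnings that follow note that passing a PSK path this way is
    # scheduled for removal in v24.09.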
00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.775 [2024-07-16 00:51:15.406953] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YYqOwO1Gob 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.775 [2024-07-16 00:51:15.414998] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:57.775 [2024-07-16 00:51:15.415042] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:57.775 nvme0n1 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.775 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.775 [ 00:24:57.775 { 00:24:57.775 "name": "nvme0n1", 00:24:57.775 "aliases": [ 00:24:57.775 "a4e61872-3d4e-4b81-a396-2bfcf809f982" 00:24:57.775 ], 00:24:57.775 "product_name": "NVMe disk", 00:24:57.775 "block_size": 512, 00:24:57.775 "num_blocks": 2097152, 00:24:57.775 "uuid": "a4e61872-3d4e-4b81-a396-2bfcf809f982", 00:24:57.775 "assigned_rate_limits": { 00:24:57.775 "rw_ios_per_sec": 0, 00:24:57.775 "rw_mbytes_per_sec": 0, 00:24:57.775 "r_mbytes_per_sec": 0, 00:24:57.775 "w_mbytes_per_sec": 0 00:24:57.776 }, 00:24:57.776 "claimed": false, 00:24:57.776 "zoned": false, 00:24:57.776 "supported_io_types": { 00:24:57.776 "read": true, 00:24:57.776 "write": true, 00:24:57.776 "unmap": false, 00:24:57.776 "flush": true, 00:24:57.776 "reset": true, 00:24:57.776 "nvme_admin": true, 00:24:57.776 "nvme_io": true, 00:24:57.776 "nvme_io_md": false, 00:24:57.776 "write_zeroes": true, 00:24:57.776 "zcopy": false, 00:24:57.776 "get_zone_info": false, 00:24:57.776 "zone_management": false, 00:24:57.776 "zone_append": false, 00:24:57.776 "compare": true, 00:24:57.776 "compare_and_write": true, 00:24:57.776 "abort": true, 00:24:57.776 "seek_hole": false, 00:24:57.776 "seek_data": false, 00:24:57.776 "copy": true, 00:24:57.776 "nvme_iov_md": false 00:24:57.776 }, 00:24:57.776 "memory_domains": [ 00:24:57.776 { 00:24:57.776 "dma_device_id": "system", 00:24:57.776 "dma_device_type": 1 00:24:57.776 } 00:24:57.776 ], 00:24:57.776 "driver_specific": { 00:24:57.776 "nvme": [ 00:24:57.776 { 00:24:57.776 "trid": { 00:24:57.776 "trtype": "TCP", 00:24:57.776 "adrfam": "IPv4", 00:24:57.776 "traddr": "10.0.0.2", 00:24:57.776 "trsvcid": "4421", 00:24:57.776 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:57.776 }, 00:24:57.776 "ctrlr_data": { 00:24:57.776 "cntlid": 3, 00:24:57.776 "vendor_id": "0x8086", 00:24:57.776 "model_number": "SPDK bdev Controller", 00:24:57.776 "serial_number": "00000000000000000000", 00:24:57.776 "firmware_revision": "24.09", 00:24:57.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
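The block above exercises SPDK's experimental NVMe/TCP TLS path: the subsystem is locked down to registered hosts, a --secure-channel listener is opened on port 4421, the host NQN is admitted with a PSK file, and the initiator re-attaches through that listener with the same key. A minimal sketch of the same RPC sequence against an already-running target, assuming scripts/rpc.py is on PATH and using a hypothetical key path /tmp/psk.key in place of the mktemp name, would be:

    # interchange-format PSK copied verbatim from the trace; assumption: redirected into the key file
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/psk.key
    chmod 0600 /tmp/psk.key
    # target side: require explicit host registration, open a TLS listener, allow the host with this PSK
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/psk.key
    # initiator side (still the SPDK bdev layer): attach through the TLS listener with the same key
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/psk.key

Note the warnings in the trace: both the PSK-path form of nvmf_subsystem_add_host and spdk_nvme_ctrlr_opts.psk are flagged as deprecated and scheduled for removal in v24.09, so the exact flags may differ on newer releases.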
00:24:57.776 "oacs": { 00:24:57.776 "security": 0, 00:24:57.776 "format": 0, 00:24:57.776 "firmware": 0, 00:24:57.776 "ns_manage": 0 00:24:57.776 }, 00:24:57.776 "multi_ctrlr": true, 00:24:57.776 "ana_reporting": false 00:24:57.776 }, 00:24:57.776 "vs": { 00:24:57.776 "nvme_version": "1.3" 00:24:57.776 }, 00:24:57.776 "ns_data": { 00:24:57.776 "id": 1, 00:24:57.776 "can_share": true 00:24:57.776 } 00:24:57.776 } 00:24:57.776 ], 00:24:57.776 "mp_policy": "active_passive" 00:24:57.776 } 00:24:57.776 } 00:24:57.776 ] 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.YYqOwO1Gob 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:57.776 rmmod nvme_tcp 00:24:57.776 rmmod nvme_fabrics 00:24:57.776 rmmod nvme_keyring 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3134789 ']' 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3134789 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3134789 ']' 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3134789 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.776 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3134789 00:24:58.036 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:58.036 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:58.036 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3134789' 00:24:58.036 killing process with pid 3134789 00:24:58.036 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3134789 00:24:58.036 [2024-07-16 00:51:15.629580] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:24:58.036 [2024-07-16 00:51:15.629611] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:58.036 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3134789 00:24:58.036 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:58.036 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:58.036 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:58.036 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:58.036 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:58.036 00:51:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.036 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.036 00:51:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.574 00:51:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:00.574 00:25:00.574 real 0m10.036s 00:25:00.574 user 0m3.834s 00:25:00.574 sys 0m4.865s 00:25:00.574 00:51:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:00.574 00:51:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:00.574 ************************************ 00:25:00.574 END TEST nvmf_async_init 00:25:00.574 ************************************ 00:25:00.574 00:51:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:00.574 00:51:17 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:00.574 00:51:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:00.574 00:51:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.574 00:51:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:00.574 ************************************ 00:25:00.574 START TEST dma 00:25:00.574 ************************************ 00:25:00.574 00:51:17 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:00.574 * Looking for test storage... 
00:25:00.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:00.574 00:51:18 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.574 00:51:18 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.574 00:51:18 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.574 00:51:18 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.574 00:51:18 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.574 00:51:18 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.574 00:51:18 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.574 00:51:18 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:25:00.574 00:51:18 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:00.574 00:51:18 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:00.574 00:51:18 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:00.574 00:51:18 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:25:00.574 00:25:00.574 real 0m0.117s 00:25:00.574 user 0m0.065s 00:25:00.574 sys 0m0.060s 00:25:00.574 00:51:18 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:00.574 00:51:18 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:25:00.574 ************************************ 00:25:00.574 END TEST dma 00:25:00.574 ************************************ 00:25:00.574 00:51:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:00.574 00:51:18 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:00.574 00:51:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:00.574 00:51:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.574 00:51:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:00.574 ************************************ 00:25:00.574 START TEST nvmf_identify 00:25:00.574 ************************************ 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:00.574 * Looking for test storage... 
00:25:00.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.574 00:51:18 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:25:00.575 00:51:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:07.152 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:07.152 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:07.152 Found net devices under 0000:af:00.0: cvl_0_0 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:07.152 Found net devices under 0000:af:00.1: cvl_0_1 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:07.152 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:07.153 00:51:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:07.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:25:07.153 00:25:07.153 --- 10.0.0.2 ping statistics --- 00:25:07.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.153 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:07.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:25:07.153 00:25:07.153 --- 10.0.0.1 ping statistics --- 00:25:07.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.153 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3138815 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3138815 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3138815 ']' 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:07.153 00:51:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.153 [2024-07-16 00:51:24.148331] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:25:07.153 [2024-07-16 00:51:24.148386] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.153 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.153 [2024-07-16 00:51:24.237653] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:07.153 [2024-07-16 00:51:24.327262] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
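The nvmftestinit plumbing above is what makes the 10.0.0.x addressing work on a single machine: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, with a firewall exception for the NVMe/TCP port and a ping in each direction as a sanity check. A condensed sketch of that setup, assuming the same interface names, is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                               # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target namespace -> root namespace

The target itself then has to run inside that namespace, which is why the nvmf_tgt launch above is prefixed with ip netns exec cvl_0_0_ns_spdk.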
00:25:07.153 [2024-07-16 00:51:24.327307] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.153 [2024-07-16 00:51:24.327317] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.153 [2024-07-16 00:51:24.327326] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.153 [2024-07-16 00:51:24.327334] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.153 [2024-07-16 00:51:24.327436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.153 [2024-07-16 00:51:24.327547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.153 [2024-07-16 00:51:24.327634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:07.153 [2024-07-16 00:51:24.327635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.413 [2024-07-16 00:51:25.102077] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.413 Malloc0 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.413 [2024-07-16 00:51:25.206059] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.413 [ 00:25:07.413 { 00:25:07.413 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:07.413 "subtype": "Discovery", 00:25:07.413 "listen_addresses": [ 00:25:07.413 { 00:25:07.413 "trtype": "TCP", 00:25:07.413 "adrfam": "IPv4", 00:25:07.413 "traddr": "10.0.0.2", 00:25:07.413 "trsvcid": "4420" 00:25:07.413 } 00:25:07.413 ], 00:25:07.413 "allow_any_host": true, 00:25:07.413 "hosts": [] 00:25:07.413 }, 00:25:07.413 { 00:25:07.413 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.413 "subtype": "NVMe", 00:25:07.413 "listen_addresses": [ 00:25:07.413 { 00:25:07.413 "trtype": "TCP", 00:25:07.413 "adrfam": "IPv4", 00:25:07.413 "traddr": "10.0.0.2", 00:25:07.413 "trsvcid": "4420" 00:25:07.413 } 00:25:07.413 ], 00:25:07.413 "allow_any_host": true, 00:25:07.413 "hosts": [], 00:25:07.413 "serial_number": "SPDK00000000000001", 00:25:07.413 "model_number": "SPDK bdev Controller", 00:25:07.413 "max_namespaces": 32, 00:25:07.413 "min_cntlid": 1, 00:25:07.413 "max_cntlid": 65519, 00:25:07.413 "namespaces": [ 00:25:07.413 { 00:25:07.413 "nsid": 1, 00:25:07.413 "bdev_name": "Malloc0", 00:25:07.413 "name": "Malloc0", 00:25:07.413 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:07.413 "eui64": "ABCDEF0123456789", 00:25:07.413 "uuid": "5ede17d7-63a4-4318-aab7-3fb5a30cb8ef" 00:25:07.413 } 00:25:07.413 ] 00:25:07.413 } 00:25:07.413 ] 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.413 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:07.677 [2024-07-16 00:51:25.262199] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
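Before spdk_nvme_identify is launched, identify.sh assembles the target configuration over RPC: the TCP transport (with the options carried in NVMF_TRANSPORT_OPTS), a 64 MiB Malloc bdev with 512-byte blocks, a subsystem carrying fixed NGUID/EUI-64 values, and both a data listener and a discovery listener on 10.0.0.2:4420; nvmf_get_subsystems then returns the JSON dump shown above. A compact sketch of the same sequence, assuming scripts/rpc.py talking to the default /var/tmp/spdk.sock of the target started earlier, is:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems          # JSON listing of the discovery subsystem and cnode1

spdk_nvme_identify then connects to the discovery subsystem (nqn.2014-08.org.nvmexpress.discovery) on that same listener with every debug log flag enabled (-L all), which is what produces the controller-initialization DEBUG trace that follows: connect adminq, read VS and CAP, toggle CC.EN, wait for CSTS.RDY, then Identify Controller.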
00:25:07.677 [2024-07-16 00:51:25.262235] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3139095 ] 00:25:07.677 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.677 [2024-07-16 00:51:25.302228] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:07.677 [2024-07-16 00:51:25.302282] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:07.677 [2024-07-16 00:51:25.302289] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:07.677 [2024-07-16 00:51:25.302306] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:07.677 [2024-07-16 00:51:25.302314] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:07.677 [2024-07-16 00:51:25.303014] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:07.677 [2024-07-16 00:51:25.303054] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f4eec0 0 00:25:07.677 [2024-07-16 00:51:25.309266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:07.677 [2024-07-16 00:51:25.309283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:07.677 [2024-07-16 00:51:25.309289] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:07.677 [2024-07-16 00:51:25.309294] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:07.677 [2024-07-16 00:51:25.309329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.677 [2024-07-16 00:51:25.309336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.677 [2024-07-16 00:51:25.309341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4eec0) 00:25:07.677 [2024-07-16 00:51:25.309357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:07.677 [2024-07-16 00:51:25.309376] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd1fc0, cid 0, qid 0 00:25:07.677 [2024-07-16 00:51:25.317267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.677 [2024-07-16 00:51:25.317279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.677 [2024-07-16 00:51:25.317284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.677 [2024-07-16 00:51:25.317289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd1fc0) on tqpair=0x1f4eec0 00:25:07.677 [2024-07-16 00:51:25.317304] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:07.677 [2024-07-16 00:51:25.317313] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:07.677 [2024-07-16 00:51:25.317319] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:07.677 [2024-07-16 00:51:25.317336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.317341] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.317346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4eec0) 00:25:07.678 [2024-07-16 00:51:25.317356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.678 [2024-07-16 00:51:25.317373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd1fc0, cid 0, qid 0 00:25:07.678 [2024-07-16 00:51:25.317591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.678 [2024-07-16 00:51:25.317600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.678 [2024-07-16 00:51:25.317605] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.317610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd1fc0) on tqpair=0x1f4eec0 00:25:07.678 [2024-07-16 00:51:25.317616] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:07.678 [2024-07-16 00:51:25.317626] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:07.678 [2024-07-16 00:51:25.317635] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.317640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.317644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4eec0) 00:25:07.678 [2024-07-16 00:51:25.317653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.678 [2024-07-16 00:51:25.317671] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd1fc0, cid 0, qid 0 00:25:07.678 [2024-07-16 00:51:25.317816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.678 [2024-07-16 00:51:25.317824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.678 [2024-07-16 00:51:25.317829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.317834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd1fc0) on tqpair=0x1f4eec0 00:25:07.678 [2024-07-16 00:51:25.317840] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:07.678 [2024-07-16 00:51:25.317851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:07.678 [2024-07-16 00:51:25.317859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.317864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.317869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4eec0) 00:25:07.678 [2024-07-16 00:51:25.317877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.678 [2024-07-16 00:51:25.317891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd1fc0, cid 0, qid 0 00:25:07.678 [2024-07-16 00:51:25.318004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.678 
[2024-07-16 00:51:25.318013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.678 [2024-07-16 00:51:25.318017] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd1fc0) on tqpair=0x1f4eec0 00:25:07.678 [2024-07-16 00:51:25.318029] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:07.678 [2024-07-16 00:51:25.318041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318046] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318051] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4eec0) 00:25:07.678 [2024-07-16 00:51:25.318059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.678 [2024-07-16 00:51:25.318072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd1fc0, cid 0, qid 0 00:25:07.678 [2024-07-16 00:51:25.318184] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.678 [2024-07-16 00:51:25.318192] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.678 [2024-07-16 00:51:25.318196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd1fc0) on tqpair=0x1f4eec0 00:25:07.678 [2024-07-16 00:51:25.318207] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:07.678 [2024-07-16 00:51:25.318213] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:07.678 [2024-07-16 00:51:25.318223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:07.678 [2024-07-16 00:51:25.318330] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:07.678 [2024-07-16 00:51:25.318337] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:07.678 [2024-07-16 00:51:25.318347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318359] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4eec0) 00:25:07.678 [2024-07-16 00:51:25.318368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.678 [2024-07-16 00:51:25.318382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd1fc0, cid 0, qid 0 00:25:07.678 [2024-07-16 00:51:25.318495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.678 [2024-07-16 00:51:25.318504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.678 [2024-07-16 00:51:25.318508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd1fc0) on tqpair=0x1f4eec0 00:25:07.678 [2024-07-16 00:51:25.318519] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:07.678 [2024-07-16 00:51:25.318529] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4eec0) 00:25:07.678 [2024-07-16 00:51:25.318547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.678 [2024-07-16 00:51:25.318561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd1fc0, cid 0, qid 0 00:25:07.678 [2024-07-16 00:51:25.318670] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.678 [2024-07-16 00:51:25.318678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.678 [2024-07-16 00:51:25.318683] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd1fc0) on tqpair=0x1f4eec0 00:25:07.678 [2024-07-16 00:51:25.318693] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:07.678 [2024-07-16 00:51:25.318699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:07.678 [2024-07-16 00:51:25.318709] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:07.678 [2024-07-16 00:51:25.318719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:07.678 [2024-07-16 00:51:25.318730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4eec0) 00:25:07.678 [2024-07-16 00:51:25.318743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.678 [2024-07-16 00:51:25.318757] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd1fc0, cid 0, qid 0 00:25:07.678 [2024-07-16 00:51:25.318931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.678 [2024-07-16 00:51:25.318939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.678 [2024-07-16 00:51:25.318944] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318948] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f4eec0): datao=0, datal=4096, cccid=0 00:25:07.678 [2024-07-16 00:51:25.318954] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fd1fc0) on tqpair(0x1f4eec0): expected_datao=0, payload_size=4096 00:25:07.678 [2024-07-16 00:51:25.318960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318970] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.318978] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.319014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.678 [2024-07-16 00:51:25.319022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.678 [2024-07-16 00:51:25.319026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.319031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd1fc0) on tqpair=0x1f4eec0 00:25:07.678 [2024-07-16 00:51:25.319040] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:07.678 [2024-07-16 00:51:25.319046] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:07.678 [2024-07-16 00:51:25.319052] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:07.678 [2024-07-16 00:51:25.319058] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:07.678 [2024-07-16 00:51:25.319064] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:07.678 [2024-07-16 00:51:25.319070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:07.678 [2024-07-16 00:51:25.319081] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:07.678 [2024-07-16 00:51:25.319093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.319098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.319102] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4eec0) 00:25:07.678 [2024-07-16 00:51:25.319112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:07.678 [2024-07-16 00:51:25.319126] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd1fc0, cid 0, qid 0 00:25:07.678 [2024-07-16 00:51:25.319250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.678 [2024-07-16 00:51:25.319266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.678 [2024-07-16 00:51:25.319270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.319275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd1fc0) on tqpair=0x1f4eec0 00:25:07.678 [2024-07-16 00:51:25.319284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.319289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.678 [2024-07-16 00:51:25.319293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f4eec0) 00:25:07.678 [2024-07-16 00:51:25.319300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.679 [2024-07-16 00:51:25.319308] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.319313] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.319318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f4eec0) 00:25:07.679 [2024-07-16 00:51:25.319325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.679 [2024-07-16 00:51:25.319332] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.319337] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.319341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f4eec0) 00:25:07.679 [2024-07-16 00:51:25.319348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.679 [2024-07-16 00:51:25.319356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.319363] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.319368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4eec0) 00:25:07.679 [2024-07-16 00:51:25.319375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.679 [2024-07-16 00:51:25.319380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:07.679 [2024-07-16 00:51:25.319394] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:07.679 [2024-07-16 00:51:25.319402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.319407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f4eec0) 00:25:07.679 [2024-07-16 00:51:25.319416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.679 [2024-07-16 00:51:25.319431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd1fc0, cid 0, qid 0 00:25:07.679 [2024-07-16 00:51:25.319438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2140, cid 1, qid 0 00:25:07.679 [2024-07-16 00:51:25.319444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd22c0, cid 2, qid 0 00:25:07.679 [2024-07-16 00:51:25.319450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2440, cid 3, qid 0 00:25:07.679 [2024-07-16 00:51:25.319456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd25c0, cid 4, qid 0 00:25:07.679 [2024-07-16 00:51:25.319638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.679 [2024-07-16 00:51:25.319647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.679 [2024-07-16 00:51:25.319651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.319657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd25c0) on tqpair=0x1f4eec0 00:25:07.679 [2024-07-16 00:51:25.319662] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:07.679 [2024-07-16 00:51:25.319668] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:07.679 [2024-07-16 00:51:25.319681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.319687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f4eec0) 00:25:07.679 [2024-07-16 00:51:25.319695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.679 [2024-07-16 00:51:25.319709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd25c0, cid 4, qid 0 00:25:07.679 [2024-07-16 00:51:25.319831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.679 [2024-07-16 00:51:25.319840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.679 [2024-07-16 00:51:25.319844] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.319848] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f4eec0): datao=0, datal=4096, cccid=4 00:25:07.679 [2024-07-16 00:51:25.319854] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fd25c0) on tqpair(0x1f4eec0): expected_datao=0, payload_size=4096 00:25:07.679 [2024-07-16 00:51:25.319859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.319890] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.319895] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.364265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.679 [2024-07-16 00:51:25.364282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.679 [2024-07-16 00:51:25.364291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.364296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd25c0) on tqpair=0x1f4eec0 00:25:07.679 [2024-07-16 00:51:25.364313] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:07.679 [2024-07-16 00:51:25.364341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.364347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f4eec0) 00:25:07.679 [2024-07-16 00:51:25.364359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.679 [2024-07-16 00:51:25.364367] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.364372] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.364376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f4eec0) 00:25:07.679 [2024-07-16 00:51:25.364384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.679 [2024-07-16 00:51:25.364404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1fd25c0, cid 4, qid 0 00:25:07.679 [2024-07-16 00:51:25.364411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2740, cid 5, qid 0 00:25:07.679 [2024-07-16 00:51:25.364804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.679 [2024-07-16 00:51:25.364812] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.679 [2024-07-16 00:51:25.364817] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.364822] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f4eec0): datao=0, datal=1024, cccid=4 00:25:07.679 [2024-07-16 00:51:25.364827] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fd25c0) on tqpair(0x1f4eec0): expected_datao=0, payload_size=1024 00:25:07.679 [2024-07-16 00:51:25.364833] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.364841] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.364846] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.364853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.679 [2024-07-16 00:51:25.364860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.679 [2024-07-16 00:51:25.364865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.364869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2740) on tqpair=0x1f4eec0 00:25:07.679 [2024-07-16 00:51:25.405501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.679 [2024-07-16 00:51:25.405515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.679 [2024-07-16 00:51:25.405520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.405525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd25c0) on tqpair=0x1f4eec0 00:25:07.679 [2024-07-16 00:51:25.405545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.405551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f4eec0) 00:25:07.679 [2024-07-16 00:51:25.405562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.679 [2024-07-16 00:51:25.405584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd25c0, cid 4, qid 0 00:25:07.679 [2024-07-16 00:51:25.405718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.679 [2024-07-16 00:51:25.405727] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.679 [2024-07-16 00:51:25.405731] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.405736] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f4eec0): datao=0, datal=3072, cccid=4 00:25:07.679 [2024-07-16 00:51:25.405745] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fd25c0) on tqpair(0x1f4eec0): expected_datao=0, payload_size=3072 00:25:07.679 [2024-07-16 00:51:25.405751] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.405760] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.405764] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.405814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.679 [2024-07-16 00:51:25.405822] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.679 [2024-07-16 00:51:25.405826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.405831] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd25c0) on tqpair=0x1f4eec0 00:25:07.679 [2024-07-16 00:51:25.405842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.405847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f4eec0) 00:25:07.679 [2024-07-16 00:51:25.405855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.679 [2024-07-16 00:51:25.405874] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd25c0, cid 4, qid 0 00:25:07.679 [2024-07-16 00:51:25.406028] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.679 [2024-07-16 00:51:25.406036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.679 [2024-07-16 00:51:25.406040] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.406044] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f4eec0): datao=0, datal=8, cccid=4 00:25:07.679 [2024-07-16 00:51:25.406050] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fd25c0) on tqpair(0x1f4eec0): expected_datao=0, payload_size=8 00:25:07.679 [2024-07-16 00:51:25.406055] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.406063] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.406068] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.446419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.679 [2024-07-16 00:51:25.446432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.679 [2024-07-16 00:51:25.446437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.679 [2024-07-16 00:51:25.446442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd25c0) on tqpair=0x1f4eec0 00:25:07.679 ===================================================== 00:25:07.679 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:07.679 ===================================================== 00:25:07.679 Controller Capabilities/Features 00:25:07.679 ================================ 00:25:07.679 Vendor ID: 0000 00:25:07.679 Subsystem Vendor ID: 0000 00:25:07.679 Serial Number: .................... 00:25:07.679 Model Number: ........................................ 
00:25:07.679 Firmware Version: 24.09 00:25:07.679 Recommended Arb Burst: 0 00:25:07.679 IEEE OUI Identifier: 00 00 00 00:25:07.679 Multi-path I/O 00:25:07.679 May have multiple subsystem ports: No 00:25:07.680 May have multiple controllers: No 00:25:07.680 Associated with SR-IOV VF: No 00:25:07.680 Max Data Transfer Size: 131072 00:25:07.680 Max Number of Namespaces: 0 00:25:07.680 Max Number of I/O Queues: 1024 00:25:07.680 NVMe Specification Version (VS): 1.3 00:25:07.680 NVMe Specification Version (Identify): 1.3 00:25:07.680 Maximum Queue Entries: 128 00:25:07.680 Contiguous Queues Required: Yes 00:25:07.680 Arbitration Mechanisms Supported 00:25:07.680 Weighted Round Robin: Not Supported 00:25:07.680 Vendor Specific: Not Supported 00:25:07.680 Reset Timeout: 15000 ms 00:25:07.680 Doorbell Stride: 4 bytes 00:25:07.680 NVM Subsystem Reset: Not Supported 00:25:07.680 Command Sets Supported 00:25:07.680 NVM Command Set: Supported 00:25:07.680 Boot Partition: Not Supported 00:25:07.680 Memory Page Size Minimum: 4096 bytes 00:25:07.680 Memory Page Size Maximum: 4096 bytes 00:25:07.680 Persistent Memory Region: Not Supported 00:25:07.680 Optional Asynchronous Events Supported 00:25:07.680 Namespace Attribute Notices: Not Supported 00:25:07.680 Firmware Activation Notices: Not Supported 00:25:07.680 ANA Change Notices: Not Supported 00:25:07.680 PLE Aggregate Log Change Notices: Not Supported 00:25:07.680 LBA Status Info Alert Notices: Not Supported 00:25:07.680 EGE Aggregate Log Change Notices: Not Supported 00:25:07.680 Normal NVM Subsystem Shutdown event: Not Supported 00:25:07.680 Zone Descriptor Change Notices: Not Supported 00:25:07.680 Discovery Log Change Notices: Supported 00:25:07.680 Controller Attributes 00:25:07.680 128-bit Host Identifier: Not Supported 00:25:07.680 Non-Operational Permissive Mode: Not Supported 00:25:07.680 NVM Sets: Not Supported 00:25:07.680 Read Recovery Levels: Not Supported 00:25:07.680 Endurance Groups: Not Supported 00:25:07.680 Predictable Latency Mode: Not Supported 00:25:07.680 Traffic Based Keep ALive: Not Supported 00:25:07.680 Namespace Granularity: Not Supported 00:25:07.680 SQ Associations: Not Supported 00:25:07.680 UUID List: Not Supported 00:25:07.680 Multi-Domain Subsystem: Not Supported 00:25:07.680 Fixed Capacity Management: Not Supported 00:25:07.680 Variable Capacity Management: Not Supported 00:25:07.680 Delete Endurance Group: Not Supported 00:25:07.680 Delete NVM Set: Not Supported 00:25:07.680 Extended LBA Formats Supported: Not Supported 00:25:07.680 Flexible Data Placement Supported: Not Supported 00:25:07.680 00:25:07.680 Controller Memory Buffer Support 00:25:07.680 ================================ 00:25:07.680 Supported: No 00:25:07.680 00:25:07.680 Persistent Memory Region Support 00:25:07.680 ================================ 00:25:07.680 Supported: No 00:25:07.680 00:25:07.680 Admin Command Set Attributes 00:25:07.680 ============================ 00:25:07.680 Security Send/Receive: Not Supported 00:25:07.680 Format NVM: Not Supported 00:25:07.680 Firmware Activate/Download: Not Supported 00:25:07.680 Namespace Management: Not Supported 00:25:07.680 Device Self-Test: Not Supported 00:25:07.680 Directives: Not Supported 00:25:07.680 NVMe-MI: Not Supported 00:25:07.680 Virtualization Management: Not Supported 00:25:07.680 Doorbell Buffer Config: Not Supported 00:25:07.680 Get LBA Status Capability: Not Supported 00:25:07.680 Command & Feature Lockdown Capability: Not Supported 00:25:07.680 Abort Command Limit: 1 00:25:07.680 Async 
Event Request Limit: 4 00:25:07.680 Number of Firmware Slots: N/A 00:25:07.680 Firmware Slot 1 Read-Only: N/A 00:25:07.680 Firmware Activation Without Reset: N/A 00:25:07.680 Multiple Update Detection Support: N/A 00:25:07.680 Firmware Update Granularity: No Information Provided 00:25:07.680 Per-Namespace SMART Log: No 00:25:07.680 Asymmetric Namespace Access Log Page: Not Supported 00:25:07.680 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:07.680 Command Effects Log Page: Not Supported 00:25:07.680 Get Log Page Extended Data: Supported 00:25:07.680 Telemetry Log Pages: Not Supported 00:25:07.680 Persistent Event Log Pages: Not Supported 00:25:07.680 Supported Log Pages Log Page: May Support 00:25:07.680 Commands Supported & Effects Log Page: Not Supported 00:25:07.680 Feature Identifiers & Effects Log Page:May Support 00:25:07.680 NVMe-MI Commands & Effects Log Page: May Support 00:25:07.680 Data Area 4 for Telemetry Log: Not Supported 00:25:07.680 Error Log Page Entries Supported: 128 00:25:07.680 Keep Alive: Not Supported 00:25:07.680 00:25:07.680 NVM Command Set Attributes 00:25:07.680 ========================== 00:25:07.680 Submission Queue Entry Size 00:25:07.680 Max: 1 00:25:07.680 Min: 1 00:25:07.680 Completion Queue Entry Size 00:25:07.680 Max: 1 00:25:07.680 Min: 1 00:25:07.680 Number of Namespaces: 0 00:25:07.680 Compare Command: Not Supported 00:25:07.680 Write Uncorrectable Command: Not Supported 00:25:07.680 Dataset Management Command: Not Supported 00:25:07.680 Write Zeroes Command: Not Supported 00:25:07.680 Set Features Save Field: Not Supported 00:25:07.680 Reservations: Not Supported 00:25:07.680 Timestamp: Not Supported 00:25:07.680 Copy: Not Supported 00:25:07.680 Volatile Write Cache: Not Present 00:25:07.680 Atomic Write Unit (Normal): 1 00:25:07.680 Atomic Write Unit (PFail): 1 00:25:07.680 Atomic Compare & Write Unit: 1 00:25:07.680 Fused Compare & Write: Supported 00:25:07.680 Scatter-Gather List 00:25:07.680 SGL Command Set: Supported 00:25:07.680 SGL Keyed: Supported 00:25:07.680 SGL Bit Bucket Descriptor: Not Supported 00:25:07.680 SGL Metadata Pointer: Not Supported 00:25:07.680 Oversized SGL: Not Supported 00:25:07.680 SGL Metadata Address: Not Supported 00:25:07.680 SGL Offset: Supported 00:25:07.680 Transport SGL Data Block: Not Supported 00:25:07.680 Replay Protected Memory Block: Not Supported 00:25:07.680 00:25:07.680 Firmware Slot Information 00:25:07.680 ========================= 00:25:07.680 Active slot: 0 00:25:07.680 00:25:07.680 00:25:07.680 Error Log 00:25:07.680 ========= 00:25:07.680 00:25:07.680 Active Namespaces 00:25:07.680 ================= 00:25:07.680 Discovery Log Page 00:25:07.680 ================== 00:25:07.680 Generation Counter: 2 00:25:07.680 Number of Records: 2 00:25:07.680 Record Format: 0 00:25:07.680 00:25:07.680 Discovery Log Entry 0 00:25:07.680 ---------------------- 00:25:07.680 Transport Type: 3 (TCP) 00:25:07.680 Address Family: 1 (IPv4) 00:25:07.680 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:07.680 Entry Flags: 00:25:07.680 Duplicate Returned Information: 1 00:25:07.680 Explicit Persistent Connection Support for Discovery: 1 00:25:07.680 Transport Requirements: 00:25:07.680 Secure Channel: Not Required 00:25:07.680 Port ID: 0 (0x0000) 00:25:07.680 Controller ID: 65535 (0xffff) 00:25:07.680 Admin Max SQ Size: 128 00:25:07.680 Transport Service Identifier: 4420 00:25:07.680 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:07.680 Transport Address: 10.0.0.2 00:25:07.680 
Discovery Log Entry 1 00:25:07.680 ---------------------- 00:25:07.680 Transport Type: 3 (TCP) 00:25:07.680 Address Family: 1 (IPv4) 00:25:07.680 Subsystem Type: 2 (NVM Subsystem) 00:25:07.680 Entry Flags: 00:25:07.680 Duplicate Returned Information: 0 00:25:07.680 Explicit Persistent Connection Support for Discovery: 0 00:25:07.680 Transport Requirements: 00:25:07.680 Secure Channel: Not Required 00:25:07.680 Port ID: 0 (0x0000) 00:25:07.680 Controller ID: 65535 (0xffff) 00:25:07.680 Admin Max SQ Size: 128 00:25:07.680 Transport Service Identifier: 4420 00:25:07.680 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:07.680 Transport Address: 10.0.0.2 [2024-07-16 00:51:25.446549] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:07.680 [2024-07-16 00:51:25.446562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd1fc0) on tqpair=0x1f4eec0 00:25:07.680 [2024-07-16 00:51:25.446571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.680 [2024-07-16 00:51:25.446578] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2140) on tqpair=0x1f4eec0 00:25:07.680 [2024-07-16 00:51:25.446584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.680 [2024-07-16 00:51:25.446590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd22c0) on tqpair=0x1f4eec0 00:25:07.680 [2024-07-16 00:51:25.446596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.680 [2024-07-16 00:51:25.446602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2440) on tqpair=0x1f4eec0 00:25:07.680 [2024-07-16 00:51:25.446608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.680 [2024-07-16 00:51:25.446619] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.680 [2024-07-16 00:51:25.446624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.680 [2024-07-16 00:51:25.446631] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4eec0) 00:25:07.680 [2024-07-16 00:51:25.446641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.680 [2024-07-16 00:51:25.446658] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2440, cid 3, qid 0 00:25:07.680 [2024-07-16 00:51:25.446758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.680 [2024-07-16 00:51:25.446767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.680 [2024-07-16 00:51:25.446771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.446776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2440) on tqpair=0x1f4eec0 00:25:07.681 [2024-07-16 00:51:25.446785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.446790] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.446794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4eec0) 00:25:07.681 [2024-07-16 
00:51:25.446803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.681 [2024-07-16 00:51:25.446820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2440, cid 3, qid 0 00:25:07.681 [2024-07-16 00:51:25.446991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.681 [2024-07-16 00:51:25.446999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.681 [2024-07-16 00:51:25.447004] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2440) on tqpair=0x1f4eec0 00:25:07.681 [2024-07-16 00:51:25.447014] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:07.681 [2024-07-16 00:51:25.447020] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:07.681 [2024-07-16 00:51:25.447032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4eec0) 00:25:07.681 [2024-07-16 00:51:25.447050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.681 [2024-07-16 00:51:25.447064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2440, cid 3, qid 0 00:25:07.681 [2024-07-16 00:51:25.447179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.681 [2024-07-16 00:51:25.447187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.681 [2024-07-16 00:51:25.447192] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2440) on tqpair=0x1f4eec0 00:25:07.681 [2024-07-16 00:51:25.447209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4eec0) 00:25:07.681 [2024-07-16 00:51:25.447227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.681 [2024-07-16 00:51:25.447240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2440, cid 3, qid 0 00:25:07.681 [2024-07-16 00:51:25.447357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.681 [2024-07-16 00:51:25.447366] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.681 [2024-07-16 00:51:25.447371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2440) on tqpair=0x1f4eec0 00:25:07.681 [2024-07-16 00:51:25.447390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447400] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4eec0) 00:25:07.681 [2024-07-16 00:51:25.447408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.681 [2024-07-16 00:51:25.447422] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2440, cid 3, qid 0 00:25:07.681 [2024-07-16 00:51:25.447546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.681 [2024-07-16 00:51:25.447554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.681 [2024-07-16 00:51:25.447558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2440) on tqpair=0x1f4eec0 00:25:07.681 [2024-07-16 00:51:25.447575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447581] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447585] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4eec0) 00:25:07.681 [2024-07-16 00:51:25.447594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.681 [2024-07-16 00:51:25.447608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2440, cid 3, qid 0 00:25:07.681 [2024-07-16 00:51:25.447726] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.681 [2024-07-16 00:51:25.447734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.681 [2024-07-16 00:51:25.447739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2440) on tqpair=0x1f4eec0 00:25:07.681 [2024-07-16 00:51:25.447756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4eec0) 00:25:07.681 [2024-07-16 00:51:25.447774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.681 [2024-07-16 00:51:25.447788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2440, cid 3, qid 0 00:25:07.681 [2024-07-16 00:51:25.447891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.681 [2024-07-16 00:51:25.447900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.681 [2024-07-16 00:51:25.447904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2440) on tqpair=0x1f4eec0 00:25:07.681 [2024-07-16 00:51:25.447920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.447930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4eec0) 00:25:07.681 [2024-07-16 00:51:25.447939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.681 [2024-07-16 00:51:25.447952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2440, cid 3, qid 0 00:25:07.681 [2024-07-16 00:51:25.448088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.681 [2024-07-16 00:51:25.448096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.681 [2024-07-16 00:51:25.448100] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.448105] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2440) on tqpair=0x1f4eec0 00:25:07.681 [2024-07-16 00:51:25.448117] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.448125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.448129] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4eec0) 00:25:07.681 [2024-07-16 00:51:25.448138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.681 [2024-07-16 00:51:25.448151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2440, cid 3, qid 0 00:25:07.681 [2024-07-16 00:51:25.452264] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.681 [2024-07-16 00:51:25.452276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.681 [2024-07-16 00:51:25.452280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.452285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2440) on tqpair=0x1f4eec0 00:25:07.681 [2024-07-16 00:51:25.452299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.452304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.452309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f4eec0) 00:25:07.681 [2024-07-16 00:51:25.452318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.681 [2024-07-16 00:51:25.452333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2440, cid 3, qid 0 00:25:07.681 [2024-07-16 00:51:25.452538] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.681 [2024-07-16 00:51:25.452546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.681 [2024-07-16 00:51:25.452551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.681 [2024-07-16 00:51:25.452556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2440) on tqpair=0x1f4eec0 00:25:07.681 [2024-07-16 00:51:25.452565] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:25:07.681 00:25:07.681 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:07.681 [2024-07-16 00:51:25.496532] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:25:07.681 [2024-07-16 00:51:25.496568] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3139097 ] 00:25:07.681 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.943 [2024-07-16 00:51:25.535916] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:07.943 [2024-07-16 00:51:25.535971] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:07.943 [2024-07-16 00:51:25.535979] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:07.943 [2024-07-16 00:51:25.535991] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:07.943 [2024-07-16 00:51:25.535998] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:07.943 [2024-07-16 00:51:25.536527] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:07.943 [2024-07-16 00:51:25.536561] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x648ec0 0 00:25:07.943 [2024-07-16 00:51:25.551266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:07.943 [2024-07-16 00:51:25.551282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:07.943 [2024-07-16 00:51:25.551290] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:07.943 [2024-07-16 00:51:25.551295] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:07.943 [2024-07-16 00:51:25.551324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.551331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.551336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x648ec0) 00:25:07.943 [2024-07-16 00:51:25.551351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:07.943 [2024-07-16 00:51:25.551371] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cbfc0, cid 0, qid 0 00:25:07.943 [2024-07-16 00:51:25.559267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.943 [2024-07-16 00:51:25.559279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.943 [2024-07-16 00:51:25.559284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.559289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cbfc0) on tqpair=0x648ec0 00:25:07.943 [2024-07-16 00:51:25.559304] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:07.943 [2024-07-16 00:51:25.559312] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:07.943 [2024-07-16 00:51:25.559320] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:07.943 [2024-07-16 00:51:25.559334] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.559339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.943 
[2024-07-16 00:51:25.559344] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x648ec0) 00:25:07.943 [2024-07-16 00:51:25.559354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.943 [2024-07-16 00:51:25.559371] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cbfc0, cid 0, qid 0 00:25:07.943 [2024-07-16 00:51:25.559642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.943 [2024-07-16 00:51:25.559650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.943 [2024-07-16 00:51:25.559654] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.559659] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cbfc0) on tqpair=0x648ec0 00:25:07.943 [2024-07-16 00:51:25.559665] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:07.943 [2024-07-16 00:51:25.559676] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:07.943 [2024-07-16 00:51:25.559684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.559690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.559694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x648ec0) 00:25:07.943 [2024-07-16 00:51:25.559703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.943 [2024-07-16 00:51:25.559716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cbfc0, cid 0, qid 0 00:25:07.943 [2024-07-16 00:51:25.559892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.943 [2024-07-16 00:51:25.559900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.943 [2024-07-16 00:51:25.559904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.559909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cbfc0) on tqpair=0x648ec0 00:25:07.943 [2024-07-16 00:51:25.559916] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:07.943 [2024-07-16 00:51:25.559927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:07.943 [2024-07-16 00:51:25.559938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.559943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.559948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x648ec0) 00:25:07.943 [2024-07-16 00:51:25.559956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.943 [2024-07-16 00:51:25.559970] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cbfc0, cid 0, qid 0 00:25:07.943 [2024-07-16 00:51:25.560120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.943 [2024-07-16 00:51:25.560129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.943 
[2024-07-16 00:51:25.560134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.560139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cbfc0) on tqpair=0x648ec0 00:25:07.943 [2024-07-16 00:51:25.560145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:07.943 [2024-07-16 00:51:25.560158] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.560163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.560168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x648ec0) 00:25:07.943 [2024-07-16 00:51:25.560176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.943 [2024-07-16 00:51:25.560190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cbfc0, cid 0, qid 0 00:25:07.943 [2024-07-16 00:51:25.560347] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.943 [2024-07-16 00:51:25.560356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.943 [2024-07-16 00:51:25.560361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.560366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cbfc0) on tqpair=0x648ec0 00:25:07.943 [2024-07-16 00:51:25.560371] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:07.943 [2024-07-16 00:51:25.560377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:07.943 [2024-07-16 00:51:25.560387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:07.943 [2024-07-16 00:51:25.560494] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:07.943 [2024-07-16 00:51:25.560500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:07.943 [2024-07-16 00:51:25.560509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.560514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.560519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x648ec0) 00:25:07.943 [2024-07-16 00:51:25.560527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.943 [2024-07-16 00:51:25.560541] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cbfc0, cid 0, qid 0 00:25:07.943 [2024-07-16 00:51:25.560711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.943 [2024-07-16 00:51:25.560720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.943 [2024-07-16 00:51:25.560724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.560729] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cbfc0) on tqpair=0x648ec0 00:25:07.943 [2024-07-16 
00:51:25.560738] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:07.943 [2024-07-16 00:51:25.560750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.560756] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.560761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x648ec0) 00:25:07.943 [2024-07-16 00:51:25.560769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.943 [2024-07-16 00:51:25.560782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cbfc0, cid 0, qid 0 00:25:07.943 [2024-07-16 00:51:25.560937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.943 [2024-07-16 00:51:25.560945] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.943 [2024-07-16 00:51:25.560949] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.560954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cbfc0) on tqpair=0x648ec0 00:25:07.943 [2024-07-16 00:51:25.560960] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:07.943 [2024-07-16 00:51:25.560965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:07.943 [2024-07-16 00:51:25.560976] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:07.943 [2024-07-16 00:51:25.560990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:07.943 [2024-07-16 00:51:25.561002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.943 [2024-07-16 00:51:25.561006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x648ec0) 00:25:07.943 [2024-07-16 00:51:25.561015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.943 [2024-07-16 00:51:25.561029] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cbfc0, cid 0, qid 0 00:25:07.943 [2024-07-16 00:51:25.561267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.944 [2024-07-16 00:51:25.561276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.944 [2024-07-16 00:51:25.561280] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561285] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x648ec0): datao=0, datal=4096, cccid=0 00:25:07.944 [2024-07-16 00:51:25.561291] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cbfc0) on tqpair(0x648ec0): expected_datao=0, payload_size=4096 00:25:07.944 [2024-07-16 00:51:25.561297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561307] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561312] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.944 
[2024-07-16 00:51:25.561354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.944 [2024-07-16 00:51:25.561363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.944 [2024-07-16 00:51:25.561368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cbfc0) on tqpair=0x648ec0 00:25:07.944 [2024-07-16 00:51:25.561381] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:07.944 [2024-07-16 00:51:25.561387] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:07.944 [2024-07-16 00:51:25.561393] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:07.944 [2024-07-16 00:51:25.561398] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:07.944 [2024-07-16 00:51:25.561406] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:07.944 [2024-07-16 00:51:25.561412] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:07.944 [2024-07-16 00:51:25.561424] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:07.944 [2024-07-16 00:51:25.561435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x648ec0) 00:25:07.944 [2024-07-16 00:51:25.561454] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:07.944 [2024-07-16 00:51:25.561468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cbfc0, cid 0, qid 0 00:25:07.944 [2024-07-16 00:51:25.561621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.944 [2024-07-16 00:51:25.561629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.944 [2024-07-16 00:51:25.561634] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cbfc0) on tqpair=0x648ec0 00:25:07.944 [2024-07-16 00:51:25.561647] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x648ec0) 00:25:07.944 [2024-07-16 00:51:25.561664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.944 [2024-07-16 00:51:25.561671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561681] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x648ec0) 
00:25:07.944 [2024-07-16 00:51:25.561688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.944 [2024-07-16 00:51:25.561695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561700] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x648ec0) 00:25:07.944 [2024-07-16 00:51:25.561712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.944 [2024-07-16 00:51:25.561719] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x648ec0) 00:25:07.944 [2024-07-16 00:51:25.561736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.944 [2024-07-16 00:51:25.561742] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:07.944 [2024-07-16 00:51:25.561755] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:07.944 [2024-07-16 00:51:25.561763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.561768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x648ec0) 00:25:07.944 [2024-07-16 00:51:25.561777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.944 [2024-07-16 00:51:25.561794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cbfc0, cid 0, qid 0 00:25:07.944 [2024-07-16 00:51:25.561801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc140, cid 1, qid 0 00:25:07.944 [2024-07-16 00:51:25.561807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc2c0, cid 2, qid 0 00:25:07.944 [2024-07-16 00:51:25.561813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc440, cid 3, qid 0 00:25:07.944 [2024-07-16 00:51:25.561819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc5c0, cid 4, qid 0 00:25:07.944 [2024-07-16 00:51:25.562100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.944 [2024-07-16 00:51:25.562109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.944 [2024-07-16 00:51:25.562113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.562118] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc5c0) on tqpair=0x648ec0 00:25:07.944 [2024-07-16 00:51:25.562125] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:07.944 [2024-07-16 00:51:25.562131] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:07.944 [2024-07-16 00:51:25.562143] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:07.944 [2024-07-16 00:51:25.562151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:07.944 [2024-07-16 00:51:25.562159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.562164] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.562169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x648ec0) 00:25:07.944 [2024-07-16 00:51:25.562177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:07.944 [2024-07-16 00:51:25.562191] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc5c0, cid 4, qid 0 00:25:07.944 [2024-07-16 00:51:25.562376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.944 [2024-07-16 00:51:25.562385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.944 [2024-07-16 00:51:25.562390] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.562395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc5c0) on tqpair=0x648ec0 00:25:07.944 [2024-07-16 00:51:25.562472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:07.944 [2024-07-16 00:51:25.562484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:07.944 [2024-07-16 00:51:25.562494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.562499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x648ec0) 00:25:07.944 [2024-07-16 00:51:25.562507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.944 [2024-07-16 00:51:25.562521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc5c0, cid 4, qid 0 00:25:07.944 [2024-07-16 00:51:25.562697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.944 [2024-07-16 00:51:25.562706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.944 [2024-07-16 00:51:25.562711] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.562716] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x648ec0): datao=0, datal=4096, cccid=4 00:25:07.944 [2024-07-16 00:51:25.562721] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cc5c0) on tqpair(0x648ec0): expected_datao=0, payload_size=4096 00:25:07.944 [2024-07-16 00:51:25.562730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.562760] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.562765] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.607262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.944 [2024-07-16 00:51:25.607276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:25:07.944 [2024-07-16 00:51:25.607280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.607286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc5c0) on tqpair=0x648ec0 00:25:07.944 [2024-07-16 00:51:25.607297] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:07.944 [2024-07-16 00:51:25.607311] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:07.944 [2024-07-16 00:51:25.607323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:07.944 [2024-07-16 00:51:25.607333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.607338] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x648ec0) 00:25:07.944 [2024-07-16 00:51:25.607347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.944 [2024-07-16 00:51:25.607365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc5c0, cid 4, qid 0 00:25:07.944 [2024-07-16 00:51:25.607636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.944 [2024-07-16 00:51:25.607645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.944 [2024-07-16 00:51:25.607649] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.607654] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x648ec0): datao=0, datal=4096, cccid=4 00:25:07.944 [2024-07-16 00:51:25.607660] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cc5c0) on tqpair(0x648ec0): expected_datao=0, payload_size=4096 00:25:07.944 [2024-07-16 00:51:25.607665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.607708] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.607713] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.648414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.944 [2024-07-16 00:51:25.648426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.944 [2024-07-16 00:51:25.648431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.944 [2024-07-16 00:51:25.648437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc5c0) on tqpair=0x648ec0 00:25:07.945 [2024-07-16 00:51:25.648454] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:07.945 [2024-07-16 00:51:25.648467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:07.945 [2024-07-16 00:51:25.648478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.648483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x648ec0) 00:25:07.945 [2024-07-16 00:51:25.648492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.945 [2024-07-16 00:51:25.648508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc5c0, cid 4, qid 0 00:25:07.945 [2024-07-16 00:51:25.648672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.945 [2024-07-16 00:51:25.648681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.945 [2024-07-16 00:51:25.648688] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.648693] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x648ec0): datao=0, datal=4096, cccid=4 00:25:07.945 [2024-07-16 00:51:25.648699] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cc5c0) on tqpair(0x648ec0): expected_datao=0, payload_size=4096 00:25:07.945 [2024-07-16 00:51:25.648705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.648751] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.648757] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.689448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.945 [2024-07-16 00:51:25.689461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.945 [2024-07-16 00:51:25.689466] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.689471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc5c0) on tqpair=0x648ec0 00:25:07.945 [2024-07-16 00:51:25.689482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:07.945 [2024-07-16 00:51:25.689494] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:07.945 [2024-07-16 00:51:25.689505] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:07.945 [2024-07-16 00:51:25.689512] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:07.945 [2024-07-16 00:51:25.689519] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:07.945 [2024-07-16 00:51:25.689525] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:07.945 [2024-07-16 00:51:25.689531] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:07.945 [2024-07-16 00:51:25.689537] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:07.945 [2024-07-16 00:51:25.689543] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:07.945 [2024-07-16 00:51:25.689560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.689565] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x648ec0) 00:25:07.945 [2024-07-16 00:51:25.689575] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.945 [2024-07-16 00:51:25.689583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.689587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.689592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x648ec0) 00:25:07.945 [2024-07-16 00:51:25.689600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.945 [2024-07-16 00:51:25.689619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc5c0, cid 4, qid 0 00:25:07.945 [2024-07-16 00:51:25.689626] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc740, cid 5, qid 0 00:25:07.945 [2024-07-16 00:51:25.689845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.945 [2024-07-16 00:51:25.689853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.945 [2024-07-16 00:51:25.689857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.689862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc5c0) on tqpair=0x648ec0 00:25:07.945 [2024-07-16 00:51:25.689873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.945 [2024-07-16 00:51:25.689881] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.945 [2024-07-16 00:51:25.689885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.689890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc740) on tqpair=0x648ec0 00:25:07.945 [2024-07-16 00:51:25.689902] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.689907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x648ec0) 00:25:07.945 [2024-07-16 00:51:25.689915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.945 [2024-07-16 00:51:25.689929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc740, cid 5, qid 0 00:25:07.945 [2024-07-16 00:51:25.690098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.945 [2024-07-16 00:51:25.690106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.945 [2024-07-16 00:51:25.690111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.690115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc740) on tqpair=0x648ec0 00:25:07.945 [2024-07-16 00:51:25.690127] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.690132] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x648ec0) 00:25:07.945 [2024-07-16 00:51:25.690140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.945 [2024-07-16 00:51:25.690154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc740, cid 5, qid 0 00:25:07.945 [2024-07-16 00:51:25.690336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.945 [2024-07-16 00:51:25.690345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:25:07.945 [2024-07-16 00:51:25.690349] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.690354] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc740) on tqpair=0x648ec0 00:25:07.945 [2024-07-16 00:51:25.690366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.690371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x648ec0) 00:25:07.945 [2024-07-16 00:51:25.690379] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.945 [2024-07-16 00:51:25.690393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc740, cid 5, qid 0 00:25:07.945 [2024-07-16 00:51:25.690566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.945 [2024-07-16 00:51:25.690574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.945 [2024-07-16 00:51:25.690579] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.690583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc740) on tqpair=0x648ec0 00:25:07.945 [2024-07-16 00:51:25.690601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.690607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x648ec0) 00:25:07.945 [2024-07-16 00:51:25.690615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.945 [2024-07-16 00:51:25.690625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.690629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x648ec0) 00:25:07.945 [2024-07-16 00:51:25.690637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.945 [2024-07-16 00:51:25.690646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.690653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x648ec0) 00:25:07.945 [2024-07-16 00:51:25.690661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.945 [2024-07-16 00:51:25.690670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.690675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x648ec0) 00:25:07.945 [2024-07-16 00:51:25.690682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.945 [2024-07-16 00:51:25.690697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc740, cid 5, qid 0 00:25:07.945 [2024-07-16 00:51:25.690703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc5c0, cid 4, qid 0 00:25:07.945 [2024-07-16 00:51:25.690710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc8c0, cid 6, qid 0 00:25:07.945 [2024-07-16 
00:51:25.690716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cca40, cid 7, qid 0 00:25:07.945 [2024-07-16 00:51:25.691099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.945 [2024-07-16 00:51:25.691108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.945 [2024-07-16 00:51:25.691112] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.691117] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x648ec0): datao=0, datal=8192, cccid=5 00:25:07.945 [2024-07-16 00:51:25.691123] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cc740) on tqpair(0x648ec0): expected_datao=0, payload_size=8192 00:25:07.945 [2024-07-16 00:51:25.691129] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.691225] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.691231] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.691238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.945 [2024-07-16 00:51:25.691245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.945 [2024-07-16 00:51:25.691249] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.695259] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x648ec0): datao=0, datal=512, cccid=4 00:25:07.945 [2024-07-16 00:51:25.695267] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cc5c0) on tqpair(0x648ec0): expected_datao=0, payload_size=512 00:25:07.945 [2024-07-16 00:51:25.695273] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.695281] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.695286] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.695293] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.945 [2024-07-16 00:51:25.695301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.945 [2024-07-16 00:51:25.695305] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.695310] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x648ec0): datao=0, datal=512, cccid=6 00:25:07.945 [2024-07-16 00:51:25.695315] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cc8c0) on tqpair(0x648ec0): expected_datao=0, payload_size=512 00:25:07.945 [2024-07-16 00:51:25.695321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.695328] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.945 [2024-07-16 00:51:25.695333] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.946 [2024-07-16 00:51:25.695340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.946 [2024-07-16 00:51:25.695347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.946 [2024-07-16 00:51:25.695352] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.946 [2024-07-16 00:51:25.695359] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x648ec0): datao=0, datal=4096, cccid=7 00:25:07.946 [2024-07-16 00:51:25.695365] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6cca40) on tqpair(0x648ec0): expected_datao=0, payload_size=4096 00:25:07.946 [2024-07-16 00:51:25.695371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.946 [2024-07-16 00:51:25.695379] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.946 [2024-07-16 00:51:25.695383] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.946 [2024-07-16 00:51:25.695394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.946 [2024-07-16 00:51:25.695401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.946 [2024-07-16 00:51:25.695406] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.946 [2024-07-16 00:51:25.695411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc740) on tqpair=0x648ec0 00:25:07.946 [2024-07-16 00:51:25.695426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.946 [2024-07-16 00:51:25.695434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.946 [2024-07-16 00:51:25.695438] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.946 [2024-07-16 00:51:25.695443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc5c0) on tqpair=0x648ec0 00:25:07.946 [2024-07-16 00:51:25.695455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.946 [2024-07-16 00:51:25.695463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.946 [2024-07-16 00:51:25.695467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.946 [2024-07-16 00:51:25.695472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc8c0) on tqpair=0x648ec0 00:25:07.946 [2024-07-16 00:51:25.695481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.946 [2024-07-16 00:51:25.695488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.946 [2024-07-16 00:51:25.695492] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.946 [2024-07-16 00:51:25.695497] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cca40) on tqpair=0x648ec0 00:25:07.946 ===================================================== 00:25:07.946 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:07.946 ===================================================== 00:25:07.946 Controller Capabilities/Features 00:25:07.946 ================================ 00:25:07.946 Vendor ID: 8086 00:25:07.946 Subsystem Vendor ID: 8086 00:25:07.946 Serial Number: SPDK00000000000001 00:25:07.946 Model Number: SPDK bdev Controller 00:25:07.946 Firmware Version: 24.09 00:25:07.946 Recommended Arb Burst: 6 00:25:07.946 IEEE OUI Identifier: e4 d2 5c 00:25:07.946 Multi-path I/O 00:25:07.946 May have multiple subsystem ports: Yes 00:25:07.946 May have multiple controllers: Yes 00:25:07.946 Associated with SR-IOV VF: No 00:25:07.946 Max Data Transfer Size: 131072 00:25:07.946 Max Number of Namespaces: 32 00:25:07.946 Max Number of I/O Queues: 127 00:25:07.946 NVMe Specification Version (VS): 1.3 00:25:07.946 NVMe Specification Version (Identify): 1.3 00:25:07.946 Maximum Queue Entries: 128 00:25:07.946 Contiguous Queues Required: Yes 00:25:07.946 Arbitration Mechanisms Supported 00:25:07.946 Weighted Round Robin: Not Supported 00:25:07.946 Vendor Specific: Not Supported 00:25:07.946 Reset Timeout: 15000 ms 00:25:07.946 
Doorbell Stride: 4 bytes 00:25:07.946 NVM Subsystem Reset: Not Supported 00:25:07.946 Command Sets Supported 00:25:07.946 NVM Command Set: Supported 00:25:07.946 Boot Partition: Not Supported 00:25:07.946 Memory Page Size Minimum: 4096 bytes 00:25:07.946 Memory Page Size Maximum: 4096 bytes 00:25:07.946 Persistent Memory Region: Not Supported 00:25:07.946 Optional Asynchronous Events Supported 00:25:07.946 Namespace Attribute Notices: Supported 00:25:07.946 Firmware Activation Notices: Not Supported 00:25:07.946 ANA Change Notices: Not Supported 00:25:07.946 PLE Aggregate Log Change Notices: Not Supported 00:25:07.946 LBA Status Info Alert Notices: Not Supported 00:25:07.946 EGE Aggregate Log Change Notices: Not Supported 00:25:07.946 Normal NVM Subsystem Shutdown event: Not Supported 00:25:07.946 Zone Descriptor Change Notices: Not Supported 00:25:07.946 Discovery Log Change Notices: Not Supported 00:25:07.946 Controller Attributes 00:25:07.946 128-bit Host Identifier: Supported 00:25:07.946 Non-Operational Permissive Mode: Not Supported 00:25:07.946 NVM Sets: Not Supported 00:25:07.946 Read Recovery Levels: Not Supported 00:25:07.946 Endurance Groups: Not Supported 00:25:07.946 Predictable Latency Mode: Not Supported 00:25:07.946 Traffic Based Keep ALive: Not Supported 00:25:07.946 Namespace Granularity: Not Supported 00:25:07.946 SQ Associations: Not Supported 00:25:07.946 UUID List: Not Supported 00:25:07.946 Multi-Domain Subsystem: Not Supported 00:25:07.946 Fixed Capacity Management: Not Supported 00:25:07.946 Variable Capacity Management: Not Supported 00:25:07.946 Delete Endurance Group: Not Supported 00:25:07.946 Delete NVM Set: Not Supported 00:25:07.946 Extended LBA Formats Supported: Not Supported 00:25:07.946 Flexible Data Placement Supported: Not Supported 00:25:07.946 00:25:07.946 Controller Memory Buffer Support 00:25:07.946 ================================ 00:25:07.946 Supported: No 00:25:07.946 00:25:07.946 Persistent Memory Region Support 00:25:07.946 ================================ 00:25:07.946 Supported: No 00:25:07.946 00:25:07.946 Admin Command Set Attributes 00:25:07.946 ============================ 00:25:07.946 Security Send/Receive: Not Supported 00:25:07.946 Format NVM: Not Supported 00:25:07.946 Firmware Activate/Download: Not Supported 00:25:07.946 Namespace Management: Not Supported 00:25:07.946 Device Self-Test: Not Supported 00:25:07.946 Directives: Not Supported 00:25:07.946 NVMe-MI: Not Supported 00:25:07.946 Virtualization Management: Not Supported 00:25:07.946 Doorbell Buffer Config: Not Supported 00:25:07.946 Get LBA Status Capability: Not Supported 00:25:07.946 Command & Feature Lockdown Capability: Not Supported 00:25:07.946 Abort Command Limit: 4 00:25:07.946 Async Event Request Limit: 4 00:25:07.946 Number of Firmware Slots: N/A 00:25:07.946 Firmware Slot 1 Read-Only: N/A 00:25:07.946 Firmware Activation Without Reset: N/A 00:25:07.946 Multiple Update Detection Support: N/A 00:25:07.946 Firmware Update Granularity: No Information Provided 00:25:07.946 Per-Namespace SMART Log: No 00:25:07.946 Asymmetric Namespace Access Log Page: Not Supported 00:25:07.946 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:07.946 Command Effects Log Page: Supported 00:25:07.946 Get Log Page Extended Data: Supported 00:25:07.946 Telemetry Log Pages: Not Supported 00:25:07.946 Persistent Event Log Pages: Not Supported 00:25:07.946 Supported Log Pages Log Page: May Support 00:25:07.946 Commands Supported & Effects Log Page: Not Supported 00:25:07.946 Feature Identifiers & 
Effects Log Page:May Support 00:25:07.946 NVMe-MI Commands & Effects Log Page: May Support 00:25:07.946 Data Area 4 for Telemetry Log: Not Supported 00:25:07.946 Error Log Page Entries Supported: 128 00:25:07.946 Keep Alive: Supported 00:25:07.946 Keep Alive Granularity: 10000 ms 00:25:07.946 00:25:07.946 NVM Command Set Attributes 00:25:07.946 ========================== 00:25:07.946 Submission Queue Entry Size 00:25:07.946 Max: 64 00:25:07.946 Min: 64 00:25:07.946 Completion Queue Entry Size 00:25:07.946 Max: 16 00:25:07.946 Min: 16 00:25:07.946 Number of Namespaces: 32 00:25:07.946 Compare Command: Supported 00:25:07.946 Write Uncorrectable Command: Not Supported 00:25:07.946 Dataset Management Command: Supported 00:25:07.946 Write Zeroes Command: Supported 00:25:07.946 Set Features Save Field: Not Supported 00:25:07.946 Reservations: Supported 00:25:07.946 Timestamp: Not Supported 00:25:07.946 Copy: Supported 00:25:07.946 Volatile Write Cache: Present 00:25:07.946 Atomic Write Unit (Normal): 1 00:25:07.946 Atomic Write Unit (PFail): 1 00:25:07.946 Atomic Compare & Write Unit: 1 00:25:07.946 Fused Compare & Write: Supported 00:25:07.946 Scatter-Gather List 00:25:07.946 SGL Command Set: Supported 00:25:07.946 SGL Keyed: Supported 00:25:07.946 SGL Bit Bucket Descriptor: Not Supported 00:25:07.946 SGL Metadata Pointer: Not Supported 00:25:07.946 Oversized SGL: Not Supported 00:25:07.946 SGL Metadata Address: Not Supported 00:25:07.946 SGL Offset: Supported 00:25:07.946 Transport SGL Data Block: Not Supported 00:25:07.946 Replay Protected Memory Block: Not Supported 00:25:07.946 00:25:07.946 Firmware Slot Information 00:25:07.946 ========================= 00:25:07.946 Active slot: 1 00:25:07.946 Slot 1 Firmware Revision: 24.09 00:25:07.946 00:25:07.946 00:25:07.946 Commands Supported and Effects 00:25:07.946 ============================== 00:25:07.946 Admin Commands 00:25:07.946 -------------- 00:25:07.946 Get Log Page (02h): Supported 00:25:07.946 Identify (06h): Supported 00:25:07.946 Abort (08h): Supported 00:25:07.946 Set Features (09h): Supported 00:25:07.946 Get Features (0Ah): Supported 00:25:07.946 Asynchronous Event Request (0Ch): Supported 00:25:07.946 Keep Alive (18h): Supported 00:25:07.946 I/O Commands 00:25:07.946 ------------ 00:25:07.946 Flush (00h): Supported LBA-Change 00:25:07.946 Write (01h): Supported LBA-Change 00:25:07.946 Read (02h): Supported 00:25:07.946 Compare (05h): Supported 00:25:07.946 Write Zeroes (08h): Supported LBA-Change 00:25:07.946 Dataset Management (09h): Supported LBA-Change 00:25:07.946 Copy (19h): Supported LBA-Change 00:25:07.946 00:25:07.946 Error Log 00:25:07.946 ========= 00:25:07.946 00:25:07.946 Arbitration 00:25:07.947 =========== 00:25:07.947 Arbitration Burst: 1 00:25:07.947 00:25:07.947 Power Management 00:25:07.947 ================ 00:25:07.947 Number of Power States: 1 00:25:07.947 Current Power State: Power State #0 00:25:07.947 Power State #0: 00:25:07.947 Max Power: 0.00 W 00:25:07.947 Non-Operational State: Operational 00:25:07.947 Entry Latency: Not Reported 00:25:07.947 Exit Latency: Not Reported 00:25:07.947 Relative Read Throughput: 0 00:25:07.947 Relative Read Latency: 0 00:25:07.947 Relative Write Throughput: 0 00:25:07.947 Relative Write Latency: 0 00:25:07.947 Idle Power: Not Reported 00:25:07.947 Active Power: Not Reported 00:25:07.947 Non-Operational Permissive Mode: Not Supported 00:25:07.947 00:25:07.947 Health Information 00:25:07.947 ================== 00:25:07.947 Critical Warnings: 00:25:07.947 Available Spare Space: 
OK 00:25:07.947 Temperature: OK 00:25:07.947 Device Reliability: OK 00:25:07.947 Read Only: No 00:25:07.947 Volatile Memory Backup: OK 00:25:07.947 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:07.947 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:07.947 Available Spare: 0% 00:25:07.947 Available Spare Threshold: 0% 00:25:07.947 Life Percentage Used:[2024-07-16 00:51:25.695614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.695621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x648ec0) 00:25:07.947 [2024-07-16 00:51:25.695631] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.947 [2024-07-16 00:51:25.695648] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cca40, cid 7, qid 0 00:25:07.947 [2024-07-16 00:51:25.695916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.947 [2024-07-16 00:51:25.695925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.947 [2024-07-16 00:51:25.695929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.695934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cca40) on tqpair=0x648ec0 00:25:07.947 [2024-07-16 00:51:25.695971] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:07.947 [2024-07-16 00:51:25.695984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cbfc0) on tqpair=0x648ec0 00:25:07.947 [2024-07-16 00:51:25.695992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.947 [2024-07-16 00:51:25.695998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc140) on tqpair=0x648ec0 00:25:07.947 [2024-07-16 00:51:25.696004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.947 [2024-07-16 00:51:25.696010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc2c0) on tqpair=0x648ec0 00:25:07.947 [2024-07-16 00:51:25.696016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.947 [2024-07-16 00:51:25.696025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc440) on tqpair=0x648ec0 00:25:07.947 [2024-07-16 00:51:25.696031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.947 [2024-07-16 00:51:25.696041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696046] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696051] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x648ec0) 00:25:07.947 [2024-07-16 00:51:25.696060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.947 [2024-07-16 00:51:25.696076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc440, cid 3, qid 0 00:25:07.947 [2024-07-16 00:51:25.696226] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.947 [2024-07-16 00:51:25.696235] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.947 [2024-07-16 00:51:25.696240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc440) on tqpair=0x648ec0 00:25:07.947 [2024-07-16 00:51:25.696253] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x648ec0) 00:25:07.947 [2024-07-16 00:51:25.696277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.947 [2024-07-16 00:51:25.696296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc440, cid 3, qid 0 00:25:07.947 [2024-07-16 00:51:25.696467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.947 [2024-07-16 00:51:25.696475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.947 [2024-07-16 00:51:25.696480] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696484] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc440) on tqpair=0x648ec0 00:25:07.947 [2024-07-16 00:51:25.696490] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:07.947 [2024-07-16 00:51:25.696496] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:07.947 [2024-07-16 00:51:25.696508] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x648ec0) 00:25:07.947 [2024-07-16 00:51:25.696526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.947 [2024-07-16 00:51:25.696540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc440, cid 3, qid 0 00:25:07.947 [2024-07-16 00:51:25.696687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.947 [2024-07-16 00:51:25.696696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.947 [2024-07-16 00:51:25.696701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc440) on tqpair=0x648ec0 00:25:07.947 [2024-07-16 00:51:25.696717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x648ec0) 00:25:07.947 [2024-07-16 00:51:25.696736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.947 [2024-07-16 00:51:25.696748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc440, cid 3, qid 0 00:25:07.947 [2024-07-16 00:51:25.696888] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.947 [2024-07-16 00:51:25.696897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.947 [2024-07-16 00:51:25.696901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc440) on tqpair=0x648ec0 00:25:07.947 [2024-07-16 00:51:25.696919] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.696928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x648ec0) 00:25:07.947 [2024-07-16 00:51:25.696937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.947 [2024-07-16 00:51:25.696950] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc440, cid 3, qid 0 00:25:07.947 [2024-07-16 00:51:25.697105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.947 [2024-07-16 00:51:25.697114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.947 [2024-07-16 00:51:25.697118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.697123] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc440) on tqpair=0x648ec0 00:25:07.947 [2024-07-16 00:51:25.697134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.697140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.697144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x648ec0) 00:25:07.947 [2024-07-16 00:51:25.697153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.947 [2024-07-16 00:51:25.697166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc440, cid 3, qid 0 00:25:07.947 [2024-07-16 00:51:25.697315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.947 [2024-07-16 00:51:25.697324] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.947 [2024-07-16 00:51:25.697329] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.697334] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc440) on tqpair=0x648ec0 00:25:07.947 [2024-07-16 00:51:25.697345] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.697351] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.947 [2024-07-16 00:51:25.697355] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x648ec0) 00:25:07.948 [2024-07-16 00:51:25.697364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.948 [2024-07-16 00:51:25.697377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc440, cid 3, qid 0 00:25:07.948 [2024-07-16 00:51:25.701263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.948 [2024-07-16 00:51:25.701274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.948 [2024-07-16 00:51:25.701278] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.948 [2024-07-16 00:51:25.701284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc440) on tqpair=0x648ec0 00:25:07.948 [2024-07-16 00:51:25.701297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.948 [2024-07-16 00:51:25.701302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.948 [2024-07-16 00:51:25.701307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x648ec0) 00:25:07.948 [2024-07-16 00:51:25.701316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.948 [2024-07-16 00:51:25.701331] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6cc440, cid 3, qid 0 00:25:07.948 [2024-07-16 00:51:25.701587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.948 [2024-07-16 00:51:25.701598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.948 [2024-07-16 00:51:25.701603] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.948 [2024-07-16 00:51:25.701608] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6cc440) on tqpair=0x648ec0 00:25:07.948 [2024-07-16 00:51:25.701617] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:25:07.948 0% 00:25:07.948 Data Units Read: 0 00:25:07.948 Data Units Written: 0 00:25:07.948 Host Read Commands: 0 00:25:07.948 Host Write Commands: 0 00:25:07.948 Controller Busy Time: 0 minutes 00:25:07.948 Power Cycles: 0 00:25:07.948 Power On Hours: 0 hours 00:25:07.948 Unsafe Shutdowns: 0 00:25:07.948 Unrecoverable Media Errors: 0 00:25:07.948 Lifetime Error Log Entries: 0 00:25:07.948 Warning Temperature Time: 0 minutes 00:25:07.948 Critical Temperature Time: 0 minutes 00:25:07.948 00:25:07.948 Number of Queues 00:25:07.948 ================ 00:25:07.948 Number of I/O Submission Queues: 127 00:25:07.948 Number of I/O Completion Queues: 127 00:25:07.948 00:25:07.948 Active Namespaces 00:25:07.948 ================= 00:25:07.948 Namespace ID:1 00:25:07.948 Error Recovery Timeout: Unlimited 00:25:07.948 Command Set Identifier: NVM (00h) 00:25:07.948 Deallocate: Supported 00:25:07.948 Deallocated/Unwritten Error: Not Supported 00:25:07.948 Deallocated Read Value: Unknown 00:25:07.948 Deallocate in Write Zeroes: Not Supported 00:25:07.948 Deallocated Guard Field: 0xFFFF 00:25:07.948 Flush: Supported 00:25:07.948 Reservation: Supported 00:25:07.948 Namespace Sharing Capabilities: Multiple Controllers 00:25:07.948 Size (in LBAs): 131072 (0GiB) 00:25:07.948 Capacity (in LBAs): 131072 (0GiB) 00:25:07.948 Utilization (in LBAs): 131072 (0GiB) 00:25:07.948 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:07.948 EUI64: ABCDEF0123456789 00:25:07.948 UUID: 5ede17d7-63a4-4318-aab7-3fb5a30cb8ef 00:25:07.948 Thin Provisioning: Not Supported 00:25:07.948 Per-NS Atomic Units: Yes 00:25:07.948 Atomic Boundary Size (Normal): 0 00:25:07.948 Atomic Boundary Size (PFail): 0 00:25:07.948 Atomic Boundary Offset: 0 00:25:07.948 Maximum Single Source Range Length: 65535 00:25:07.948 Maximum Copy Length: 65535 00:25:07.948 Maximum Source Range Count: 1 00:25:07.948 NGUID/EUI64 Never Reused: No 00:25:07.948 Namespace Write Protected: No 00:25:07.948 Number of LBA Formats: 1 00:25:07.948 Current LBA Format: LBA Format #00 00:25:07.948 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:25:07.948 00:25:07.948 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:07.948 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.948 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.948 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.948 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.948 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:07.948 00:51:25 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:07.948 00:51:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:07.948 00:51:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:25:07.948 00:51:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:07.948 00:51:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:25:07.948 00:51:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:07.948 00:51:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:07.948 rmmod nvme_tcp 00:25:07.948 rmmod nvme_fabrics 00:25:07.948 rmmod nvme_keyring 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3138815 ']' 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3138815 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3138815 ']' 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3138815 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3138815 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3138815' 00:25:08.206 killing process with pid 3138815 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3138815 00:25:08.206 00:51:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3138815 00:25:08.465 00:51:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:08.465 00:51:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:08.465 00:51:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:08.465 00:51:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.465 00:51:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:08.465 00:51:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.465 00:51:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
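The shell trace around this point is the identify host test tearing itself down: sync, deleting the test subsystem over JSON-RPC, unloading the kernel NVMe/TCP initiator modules, killing the nvmf_tgt process (pid 3138815 in this run), and, just below, removing the SPDK network namespace and flushing the test interface addresses. A minimal hand-run sketch of the same teardown, assuming the rpc.py path used by this workspace and a tgt_pid variable holding the target's PID:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem the test created
  sudo modprobe -v -r nvme-tcp                            # unload initiator-side kernel modules,
  sudo modprobe -v -r nvme-fabrics                        #   mirroring nvmftestfini's cleanup
  sudo kill "$tgt_pid"                                    # stop the target app (tgt_pid is assumed; 3138815 here)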
00:25:08.465 00:51:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.371 00:51:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:10.371 00:25:10.371 real 0m10.014s 00:25:10.371 user 0m8.694s 00:25:10.371 sys 0m4.883s 00:25:10.371 00:51:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:10.371 00:51:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.371 ************************************ 00:25:10.371 END TEST nvmf_identify 00:25:10.371 ************************************ 00:25:10.371 00:51:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:10.371 00:51:28 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:10.371 00:51:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:10.371 00:51:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:10.371 00:51:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:10.630 ************************************ 00:25:10.630 START TEST nvmf_perf 00:25:10.630 ************************************ 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:10.630 * Looking for test storage... 00:25:10.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.630 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- 
host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:10.631 00:51:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:17.200 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:17.200 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:17.200 Found net devices under 0000:af:00.0: cvl_0_0 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:17.200 Found net devices under 0000:af:00.1: cvl_0_1 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:17.200 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.201 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.201 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.201 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.201 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:17.201 00:51:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:17.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:17.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:25:17.201 00:25:17.201 --- 10.0.0.2 ping statistics --- 00:25:17.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.201 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:17.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:25:17.201 00:25:17.201 --- 10.0.0.1 ping statistics --- 00:25:17.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.201 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3142680 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3142680 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3142680 ']' 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:17.201 00:51:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:17.201 [2024-07-16 00:51:34.169985] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:25:17.201 [2024-07-16 00:51:34.170039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.201 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.201 [2024-07-16 00:51:34.258390] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:17.201 [2024-07-16 00:51:34.346794] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.201 [2024-07-16 00:51:34.346839] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.201 [2024-07-16 00:51:34.346853] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.201 [2024-07-16 00:51:34.346862] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.201 [2024-07-16 00:51:34.346869] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:17.201 [2024-07-16 00:51:34.346943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.201 [2024-07-16 00:51:34.347056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:17.201 [2024-07-16 00:51:34.347179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:17.201 [2024-07-16 00:51:34.347179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.460 00:51:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:17.460 00:51:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:25:17.460 00:51:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:17.460 00:51:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:17.460 00:51:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:17.460 00:51:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.460 00:51:35 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:17.460 00:51:35 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:20.746 00:51:38 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:20.746 00:51:38 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:20.746 00:51:38 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:86:00.0 00:25:20.746 00:51:38 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:21.005 00:51:38 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:21.005 00:51:38 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:86:00.0 ']' 00:25:21.005 00:51:38 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:21.005 00:51:38 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:21.005 00:51:38 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:21.263 [2024-07-16 00:51:39.043488] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:25:21.263 00:51:39 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:21.521 00:51:39 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:21.521 00:51:39 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:21.780 00:51:39 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:21.780 00:51:39 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:22.038 00:51:39 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.296 [2024-07-16 00:51:40.080016] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.296 00:51:40 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:22.555 00:51:40 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:86:00.0 ']' 00:25:22.555 00:51:40 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:25:22.555 00:51:40 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:22.555 00:51:40 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:86:00.0' 00:25:23.931 Initializing NVMe Controllers 00:25:23.931 Attached to NVMe Controller at 0000:86:00.0 [8086:0a54] 00:25:23.931 Associating PCIE (0000:86:00.0) NSID 1 with lcore 0 00:25:23.931 Initialization complete. Launching workers. 00:25:23.931 ======================================================== 00:25:23.931 Latency(us) 00:25:23.931 Device Information : IOPS MiB/s Average min max 00:25:23.931 PCIE (0000:86:00.0) NSID 1 from core 0: 69327.22 270.81 461.11 28.02 4420.24 00:25:23.931 ======================================================== 00:25:23.931 Total : 69327.22 270.81 461.11 28.02 4420.24 00:25:23.931 00:25:23.931 00:51:41 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:23.931 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.305 Initializing NVMe Controllers 00:25:25.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:25.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:25.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:25.305 Initialization complete. Launching workers. 
00:25:25.305 ======================================================== 00:25:25.305 Latency(us) 00:25:25.305 Device Information : IOPS MiB/s Average min max 00:25:25.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 86.00 0.34 11811.32 237.10 45895.61 00:25:25.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18650.00 7944.64 47907.95 00:25:25.305 ======================================================== 00:25:25.305 Total : 142.00 0.55 14508.26 237.10 47907.95 00:25:25.305 00:25:25.305 00:51:43 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:25.305 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.204 Initializing NVMe Controllers 00:25:27.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:27.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:27.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:27.204 Initialization complete. Launching workers. 00:25:27.204 ======================================================== 00:25:27.204 Latency(us) 00:25:27.204 Device Information : IOPS MiB/s Average min max 00:25:27.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4349.00 16.99 7397.76 1136.10 12650.55 00:25:27.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3843.00 15.01 8371.22 6764.01 16278.49 00:25:27.204 ======================================================== 00:25:27.204 Total : 8192.00 32.00 7854.42 1136.10 16278.49 00:25:27.204 00:25:27.204 00:51:44 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:27.204 00:51:44 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:27.204 00:51:44 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:27.204 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.741 Initializing NVMe Controllers 00:25:29.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:29.741 Controller IO queue size 128, less than required. 00:25:29.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:29.741 Controller IO queue size 128, less than required. 00:25:29.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:29.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:29.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:29.741 Initialization complete. Launching workers. 
00:25:29.741 ======================================================== 00:25:29.741 Latency(us) 00:25:29.741 Device Information : IOPS MiB/s Average min max 00:25:29.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1169.98 292.49 112120.76 67554.85 160500.52 00:25:29.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.99 143.75 231213.34 92007.69 362232.51 00:25:29.741 ======================================================== 00:25:29.741 Total : 1744.97 436.24 151363.30 67554.85 362232.51 00:25:29.741 00:25:29.741 00:51:47 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:29.741 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.741 No valid NVMe controllers or AIO or URING devices found 00:25:29.741 Initializing NVMe Controllers 00:25:29.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:29.741 Controller IO queue size 128, less than required. 00:25:29.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:29.741 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:29.741 Controller IO queue size 128, less than required. 00:25:29.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:29.741 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:29.741 WARNING: Some requested NVMe devices were skipped 00:25:29.741 00:51:47 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:29.741 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.345 Initializing NVMe Controllers 00:25:32.345 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:32.345 Controller IO queue size 128, less than required. 00:25:32.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:32.345 Controller IO queue size 128, less than required. 00:25:32.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:32.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:32.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:32.345 Initialization complete. Launching workers. 
00:25:32.345 00:25:32.345 ==================== 00:25:32.345 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:32.345 TCP transport: 00:25:32.345 polls: 14929 00:25:32.345 idle_polls: 6463 00:25:32.345 sock_completions: 8466 00:25:32.345 nvme_completions: 5293 00:25:32.345 submitted_requests: 8028 00:25:32.345 queued_requests: 1 00:25:32.345 00:25:32.345 ==================== 00:25:32.345 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:32.345 TCP transport: 00:25:32.345 polls: 17170 00:25:32.345 idle_polls: 11225 00:25:32.345 sock_completions: 5945 00:25:32.345 nvme_completions: 4557 00:25:32.345 submitted_requests: 6808 00:25:32.345 queued_requests: 1 00:25:32.345 ======================================================== 00:25:32.345 Latency(us) 00:25:32.345 Device Information : IOPS MiB/s Average min max 00:25:32.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1320.94 330.23 99966.63 59051.82 152529.59 00:25:32.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1137.22 284.31 113722.02 40925.11 176255.36 00:25:32.345 ======================================================== 00:25:32.345 Total : 2458.16 614.54 106330.31 40925.11 176255.36 00:25:32.345 00:25:32.345 00:51:49 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:32.345 00:51:49 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:32.345 00:51:50 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:32.345 00:51:50 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:32.345 00:51:50 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:32.345 00:51:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:32.345 00:51:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:25:32.345 00:51:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:32.345 00:51:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:25:32.345 00:51:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:32.345 00:51:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:32.345 rmmod nvme_tcp 00:25:32.345 rmmod nvme_fabrics 00:25:32.345 rmmod nvme_keyring 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3142680 ']' 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3142680 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3142680 ']' 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3142680 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3142680 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf 
-- common/autotest_common.sh@966 -- # echo 'killing process with pid 3142680' 00:25:32.604 killing process with pid 3142680 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3142680 00:25:32.604 00:51:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3142680 00:25:34.506 00:51:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:34.507 00:51:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:34.507 00:51:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:34.507 00:51:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:34.507 00:51:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:34.507 00:51:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.507 00:51:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.507 00:51:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.414 00:51:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:36.414 00:25:36.414 real 0m25.701s 00:25:36.414 user 1m10.760s 00:25:36.414 sys 0m7.497s 00:25:36.414 00:51:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:36.414 00:51:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:36.414 ************************************ 00:25:36.414 END TEST nvmf_perf 00:25:36.414 ************************************ 00:25:36.414 00:51:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:36.414 00:51:53 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:36.414 00:51:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:36.414 00:51:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:36.414 00:51:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:36.414 ************************************ 00:25:36.414 START TEST nvmf_fio_host 00:25:36.414 ************************************ 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:36.414 * Looking for test storage... 
00:25:36.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.414 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:36.415 00:51:54 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:42.986 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:42.986 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:42.986 Found net devices under 0000:af:00.0: cvl_0_0 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:42.986 Found net devices under 0000:af:00.1: cvl_0_1 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.986 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:42.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:25:42.987 00:25:42.987 --- 10.0.0.2 ping statistics --- 00:25:42.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.987 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:42.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:25:42.987 00:25:42.987 --- 10.0.0.1 ping statistics --- 00:25:42.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.987 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3149327 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3149327 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3149327 ']' 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:42.987 00:51:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.987 [2024-07-16 00:51:59.951037] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:25:42.987 [2024-07-16 00:51:59.951095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.987 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.987 [2024-07-16 00:52:00.028791] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:42.987 [2024-07-16 00:52:00.122927] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:42.987 [2024-07-16 00:52:00.122972] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.987 [2024-07-16 00:52:00.122982] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.987 [2024-07-16 00:52:00.122991] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.987 [2024-07-16 00:52:00.122998] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.987 [2024-07-16 00:52:00.123051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.987 [2024-07-16 00:52:00.123162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.987 [2024-07-16 00:52:00.123289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:42.987 [2024-07-16 00:52:00.123290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.987 00:52:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.987 00:52:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:25:42.987 00:52:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:42.987 [2024-07-16 00:52:00.459604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.987 00:52:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:42.987 00:52:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:42.987 00:52:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.987 00:52:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:42.987 Malloc1 00:25:42.987 00:52:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:43.246 00:52:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:43.504 00:52:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.762 [2024-07-16 00:52:01.475379] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.762 00:52:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:44.020 00:52:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:44.584 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:44.584 fio-3.35 00:25:44.584 Starting 1 thread 00:25:44.585 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.113 00:25:47.113 test: (groupid=0, jobs=1): err= 0: pid=3149890: Tue Jul 16 00:52:04 2024 00:25:47.113 read: IOPS=3726, BW=14.6MiB/s (15.3MB/s)(29.3MiB/2015msec) 00:25:47.113 slat (nsec): min=1398, max=254522, avg=1697.48, stdev=4060.38 00:25:47.113 clat (usec): min=4995, max=30511, avg=18631.98, stdev=1876.54 00:25:47.113 lat (usec): min=5027, max=30512, avg=18633.68, stdev=1876.05 00:25:47.113 clat percentiles (usec): 00:25:47.113 | 1.00th=[14615], 5.00th=[16057], 10.00th=[16581], 20.00th=[17171], 00:25:47.113 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18482], 60.00th=[19006], 00:25:47.113 | 70.00th=[19530], 80.00th=[20055], 90.00th=[20841], 95.00th=[21365], 00:25:47.113 | 99.00th=[22414], 99.50th=[23725], 99.90th=[30278], 99.95th=[30540], 00:25:47.113 | 99.99th=[30540] 00:25:47.113 bw ( KiB/s): 
min=14416, max=15384, per=99.88%, avg=14888.00, stdev=400.27, samples=4 00:25:47.113 iops : min= 3604, max= 3846, avg=3722.00, stdev=100.07, samples=4 00:25:47.113 write: IOPS=3751, BW=14.7MiB/s (15.4MB/s)(29.5MiB/2015msec); 0 zone resets 00:25:47.113 slat (nsec): min=1458, max=228788, avg=1772.89, stdev=2971.40 00:25:47.113 clat (usec): min=2450, max=30021, avg=15532.73, stdev=1515.60 00:25:47.113 lat (usec): min=2466, max=30022, avg=15534.51, stdev=1515.20 00:25:47.113 clat percentiles (usec): 00:25:47.113 | 1.00th=[12125], 5.00th=[13566], 10.00th=[13960], 20.00th=[14484], 00:25:47.113 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15533], 60.00th=[15926], 00:25:47.113 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17171], 95.00th=[17433], 00:25:47.113 | 99.00th=[18482], 99.50th=[21365], 99.90th=[26608], 99.95th=[28967], 00:25:47.113 | 99.99th=[30016] 00:25:47.113 bw ( KiB/s): min=14592, max=15176, per=99.92%, avg=14994.00, stdev=275.15, samples=4 00:25:47.113 iops : min= 3648, max= 3794, avg=3748.50, stdev=68.79, samples=4 00:25:47.113 lat (msec) : 4=0.06%, 10=0.27%, 20=88.02%, 50=11.65% 00:25:47.113 cpu : usr=67.92%, sys=30.98%, ctx=70, majf=0, minf=5 00:25:47.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:47.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:47.113 issued rwts: total=7509,7559,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:47.113 00:25:47.113 Run status group 0 (all jobs): 00:25:47.113 READ: bw=14.6MiB/s (15.3MB/s), 14.6MiB/s-14.6MiB/s (15.3MB/s-15.3MB/s), io=29.3MiB (30.8MB), run=2015-2015msec 00:25:47.113 WRITE: bw=14.7MiB/s (15.4MB/s), 14.7MiB/s-14.7MiB/s (15.4MB/s-15.4MB/s), io=29.5MiB (31.0MB), run=2015-2015msec 00:25:47.113 00:52:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:47.114 00:52:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:47.114 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:47.114 fio-3.35 00:25:47.114 Starting 1 thread 00:25:47.114 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.645 00:25:49.645 test: (groupid=0, jobs=1): err= 0: pid=3150544: Tue Jul 16 00:52:07 2024 00:25:49.645 read: IOPS=4661, BW=72.8MiB/s (76.4MB/s)(146MiB/2011msec) 00:25:49.645 slat (nsec): min=2308, max=74822, avg=2587.66, stdev=1263.98 00:25:49.645 clat (usec): min=3869, max=39124, avg=15717.46, stdev=5641.34 00:25:49.645 lat (usec): min=3871, max=39127, avg=15720.05, stdev=5641.36 00:25:49.645 clat percentiles (usec): 00:25:49.645 | 1.00th=[ 5669], 5.00th=[ 7373], 10.00th=[ 8291], 20.00th=[ 9503], 00:25:49.645 | 30.00th=[11207], 40.00th=[14877], 50.00th=[16712], 60.00th=[17695], 00:25:49.645 | 70.00th=[18744], 80.00th=[20317], 90.00th=[22676], 95.00th=[24511], 00:25:49.645 | 99.00th=[30016], 99.50th=[32113], 99.90th=[34866], 99.95th=[35914], 00:25:49.645 | 99.99th=[39060] 00:25:49.645 bw ( KiB/s): min=28640, max=60960, per=52.57%, avg=39208.00, stdev=14699.59, samples=4 00:25:49.645 iops : min= 1790, max= 3810, avg=2450.50, stdev=918.72, samples=4 00:25:49.645 write: IOPS=2778, BW=43.4MiB/s (45.5MB/s)(80.2MiB/1848msec); 0 zone resets 00:25:49.645 slat (usec): min=26, max=381, avg=29.03, stdev= 9.41 00:25:49.645 clat (usec): min=7562, max=49596, avg=20621.85, stdev=7337.28 00:25:49.645 lat (usec): min=7590, max=49624, avg=20650.89, stdev=7337.32 00:25:49.645 clat percentiles (usec): 00:25:49.645 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[12256], 00:25:49.645 | 30.00th=[14222], 40.00th=[19530], 50.00th=[22414], 60.00th=[23987], 00:25:49.645 | 70.00th=[25822], 80.00th=[27395], 90.00th=[29230], 95.00th=[30802], 00:25:49.645 | 99.00th=[34341], 99.50th=[38536], 99.90th=[48497], 99.95th=[49021], 00:25:49.645 | 99.99th=[49546] 00:25:49.645 bw ( KiB/s): min=29504, max=61952, per=90.92%, avg=40416.00, stdev=14628.28, samples=4 00:25:49.645 iops : min= 1844, max= 3872, avg=2526.00, stdev=914.27, samples=4 00:25:49.645 lat (msec) : 4=0.01%, 10=16.65%, 20=48.19%, 50=35.14% 00:25:49.646 cpu : usr=73.63%, sys=25.22%, 
ctx=118, majf=0, minf=2 00:25:49.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:49.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:49.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:49.646 issued rwts: total=9374,5134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:49.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:49.646 00:25:49.646 Run status group 0 (all jobs): 00:25:49.646 READ: bw=72.8MiB/s (76.4MB/s), 72.8MiB/s-72.8MiB/s (76.4MB/s-76.4MB/s), io=146MiB (154MB), run=2011-2011msec 00:25:49.646 WRITE: bw=43.4MiB/s (45.5MB/s), 43.4MiB/s-43.4MiB/s (45.5MB/s-45.5MB/s), io=80.2MiB (84.1MB), run=1848-1848msec 00:25:49.646 00:52:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:49.646 00:52:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:49.646 00:52:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:49.646 00:52:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:49.646 00:52:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:49.646 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:49.646 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:49.646 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:49.646 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:49.646 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:49.646 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:49.646 rmmod nvme_tcp 00:25:49.646 rmmod nvme_fabrics 00:25:49.905 rmmod nvme_keyring 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3149327 ']' 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3149327 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3149327 ']' 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3149327 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3149327 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3149327' 00:25:49.905 killing process with pid 3149327 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3149327 00:25:49.905 00:52:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3149327 00:25:50.164 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:50.165 00:52:07 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:50.165 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:50.165 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.165 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.165 00:52:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.165 00:52:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.165 00:52:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.071 00:52:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:52.071 00:25:52.071 real 0m15.872s 00:25:52.071 user 0m59.637s 00:25:52.071 sys 0m6.658s 00:25:52.071 00:52:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:52.071 00:52:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.071 ************************************ 00:25:52.071 END TEST nvmf_fio_host 00:25:52.071 ************************************ 00:25:52.330 00:52:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:52.330 00:52:09 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:52.330 00:52:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:52.330 00:52:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:52.330 00:52:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:52.330 ************************************ 00:25:52.330 START TEST nvmf_failover 00:25:52.330 ************************************ 00:25:52.330 00:52:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:52.330 * Looking for test storage... 
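For reference, the target bring-up and fio run exercised by the nvmf_fio_host test that finished above reduce to roughly the following sequence. This is a condensed sketch assembled from the rpc.py and fio_plugin invocations recorded in this log; the workspace path, the 10.0.0.2:4420 listener and the example_config.fio job file are simply the values used in this run.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# create the TCP transport and a 64 MiB malloc bdev (512-byte blocks) to export
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
# expose the bdev through a subsystem listening on 10.0.0.2:4420
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# drive I/O against the subsystem with fio through the SPDK NVMe plugin
LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio $SPDK/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096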
00:25:52.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:52.330 00:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:52.331 00:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:58.901 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:58.901 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:58.901 Found net devices under 0000:af:00.0: cvl_0_0 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:58.901 Found net devices under 0000:af:00.1: cvl_0_1 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:58.901 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:58.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:58.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:25:58.902 00:25:58.902 --- 10.0.0.2 ping statistics --- 00:25:58.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.902 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:58.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:25:58.902 00:25:58.902 --- 10.0.0.1 ping statistics --- 00:25:58.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.902 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3154530 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3154530 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3154530 ']' 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.902 00:52:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:58.902 00:52:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.902 00:52:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:58.902 00:52:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:58.902 [2024-07-16 00:52:16.054919] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
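The nvmftestinit/nvmfappstart steps logged above amount, in outline, to the wiring below: one e810 port (cvl_0_0) is moved into a private network namespace and carries the target address 10.0.0.2, the other port (cvl_0_1) stays in the default namespace as the initiator side with 10.0.0.1, and nvmf_tgt is then started inside that namespace. This is a condensed sketch assembled from the ip/iptables/nvmf_tgt commands recorded in this log, not a full reproduction of the helper functions.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP port 4420 on the initiator-side interface
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &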
00:25:58.902 [2024-07-16 00:52:16.054978] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.902 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.902 [2024-07-16 00:52:16.147696] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:58.902 [2024-07-16 00:52:16.247440] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.902 [2024-07-16 00:52:16.247492] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.902 [2024-07-16 00:52:16.247504] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.902 [2024-07-16 00:52:16.247516] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.902 [2024-07-16 00:52:16.247525] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.902 [2024-07-16 00:52:16.247590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.902 [2024-07-16 00:52:16.247703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.902 [2024-07-16 00:52:16.247705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.161 00:52:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.161 00:52:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:59.161 00:52:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:59.161 00:52:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:59.161 00:52:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:59.161 00:52:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.161 00:52:16 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:59.419 [2024-07-16 00:52:17.126688] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.419 00:52:17 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:59.678 Malloc0 00:25:59.678 00:52:17 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:59.936 00:52:17 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:00.193 00:52:17 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.193 [2024-07-16 00:52:17.954662] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.193 00:52:17 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:00.452 [2024-07-16 
00:52:18.123272] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:00.452 00:52:18 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:00.711 [2024-07-16 00:52:18.291925] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:00.711 00:52:18 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3154901 00:26:00.711 00:52:18 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:00.711 00:52:18 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:00.711 00:52:18 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3154901 /var/tmp/bdevperf.sock 00:26:00.711 00:52:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3154901 ']' 00:26:00.711 00:52:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:00.711 00:52:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:00.711 00:52:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:00.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:00.711 00:52:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:00.711 00:52:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:00.970 00:52:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.970 00:52:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:26:00.970 00:52:18 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:01.537 NVMe0n1 00:26:01.537 00:52:19 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:01.797 00:26:01.797 00:52:19 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3155086 00:26:01.797 00:52:19 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:01.797 00:52:19 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:02.733 00:52:20 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:02.993 [2024-07-16 00:52:20.671148] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1044560 is same with the state(5) to be set 00:26:02.993 [2024-07-16 00:52:20.671298] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1044560 is same with the state(5) to be set
[the same tcp.c:1621:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x1044560 repeats with advancing timestamps through 00:52:20.673354 while the port 4420 listener is torn down; duplicate lines condensed]
00:26:02.994 00:52:20 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:06.285 00:52:23 nvmf_tcp.nvmf_failover --
host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:06.285 00:26:06.545 00:52:24 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:06.545 [2024-07-16 00:52:24.296992] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297044] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297058] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297070] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297082] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297093] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297104] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297115] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297126] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297146] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297158] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297180] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297205] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.545 [2024-07-16 00:52:24.297229] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297241] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the 
state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297276] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297300] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297311] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297323] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297334] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297345] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297356] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297367] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297377] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297388] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297399] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297411] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297422] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297433] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297445] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297456] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297478] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297491] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297504] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297516] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297528] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297539] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297552] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297574] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297586] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297597] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297609] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297621] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297632] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297643] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297666] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297678] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297688] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297700] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297711] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297722] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297733] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297756] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 
00:52:24.297779] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297793] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297805] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297816] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297840] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 [2024-07-16 00:52:24.297851] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10454b0 is same with the state(5) to be set 00:26:06.546 00:52:24 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:09.835 00:52:27 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.835 [2024-07-16 00:52:27.576239] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.835 00:52:27 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:11.214 00:52:28 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:11.214 [2024-07-16 00:52:28.770743] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770811] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770834] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770845] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770856] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770868] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770879] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770891] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770902] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770914] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770925] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770935] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770946] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770959] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770969] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.770988] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.771001] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.771012] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.771024] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.771035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.771046] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.771057] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.771069] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.214 [2024-07-16 00:52:28.771083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771105] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771118] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771129] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771140] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771152] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the 
state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771163] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771175] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771186] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771213] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771224] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771235] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771268] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771279] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771290] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771300] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771314] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771325] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771335] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771347] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771357] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771368] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771378] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771389] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771400] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771411] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771422] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771432] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771443] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771453] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771464] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771474] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771485] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 [2024-07-16 00:52:28.771496] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046450 is same with the state(5) to be set 00:26:11.215 00:52:28 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3155086 00:26:17.784 0 00:26:17.784 00:52:34 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3154901 00:26:17.784 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3154901 ']' 00:26:17.784 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3154901 00:26:17.784 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:17.784 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:17.784 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3154901 00:26:17.784 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:17.784 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:17.784 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3154901' 00:26:17.784 killing process with pid 3154901 00:26:17.784 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3154901 00:26:17.784 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3154901 00:26:17.784 00:52:34 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:17.784 [2024-07-16 00:52:18.373175] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:26:17.784 [2024-07-16 00:52:18.373242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154901 ] 00:26:17.784 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.784 [2024-07-16 00:52:18.457170] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.784 [2024-07-16 00:52:18.545849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.784 Running I/O for 15 seconds... 00:26:17.784 [2024-07-16 00:52:20.673953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:32224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:32288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.784 [2024-07-16 00:52:20.674474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.784 [2024-07-16 00:52:20.674484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 
00:52:20.674636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.674982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.674994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.785 [2024-07-16 00:52:20.675406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 
[2024-07-16 00:52:20.675524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:32760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:32776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:32824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.785 [2024-07-16 00:52:20.675867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.785 [2024-07-16 00:52:20.675876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.675888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.675900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.675912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.675922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.675933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.675943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.675954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:108 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.675964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.675976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.675985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.675997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:32896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:32944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32960 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:32984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:33008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:33016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:33024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 
00:52:20.676399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.786 [2024-07-16 00:52:20.676445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33064 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.676516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33072 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.676551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33080 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.676585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33088 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.676619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33096 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.676653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33104 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.676686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33112 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.676722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33120 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.676758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33128 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.676794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33136 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.676828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33144 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.676862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33152 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.676896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33160 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.676922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.676930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.676938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33168 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.676947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.687368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.687383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.687394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33176 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.687405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.687416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.786 [2024-07-16 00:52:20.687428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.786 [2024-07-16 00:52:20.687444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33184 len:8 PRP1 0x0 PRP2 0x0 00:26:17.786 [2024-07-16 00:52:20.687455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:20.687507] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc04a80 was disconnected and freed. reset controller. 
00:26:17.786 [2024-07-16 00:52:20.687521] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:17.786 [2024-07-16 00:52:20.687552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.786 [2024-07-16 00:52:20.687565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.786 [2024-07-16 00:52:20.687577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.786 [2024-07-16 00:52:20.687589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.786 [2024-07-16 00:52:20.687601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.786 [2024-07-16 00:52:20.687612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.786 [2024-07-16 00:52:20.687624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:17.786 [2024-07-16 00:52:20.687635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:17.786 [2024-07-16 00:52:20.687646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:17.786 [2024-07-16 00:52:20.687683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe6c40 (9): Bad file descriptor
00:26:17.786 [2024-07-16 00:52:20.692618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:17.786 [2024-07-16 00:52:20.901634] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
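The sequence above is the bdev_nvme failover path: once the qpair to 10.0.0.2:4420 is torn down, every queued command is completed with ABORTED - SQ DELETION, the controller is marked failed, and the driver reconnects on the next registered path (10.0.0.2:4421). The test scripts that drive this are not shown in the log; a minimal host-side sketch of how such a multipath attachment is typically set up with SPDK's rpc.py, assuming a running SPDK application and an illustrative bdev name Nvme0, looks like this:

  # Sketch only: register two TCP paths for the same subsystem so bdev_nvme
  # can fail over between them. The bdev name Nvme0 and the rpc.py path are
  # illustrative; only the address, ports and NQN are taken from this log.
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # A second attach with the same -b and -n but a different port registers an
  # alternate transport ID; depending on the SPDK version an explicit
  # -x failover / -x multipath option selects how the extra path is used.
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1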
00:26:17.786 [2024-07-16 00:52:24.299530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.786 [2024-07-16 00:52:24.299574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:24.299594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.786 [2024-07-16 00:52:24.299607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:24.299620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.786 [2024-07-16 00:52:24.299633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:24.299648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.786 [2024-07-16 00:52:24.299659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.786 [2024-07-16 00:52:24.299671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.299694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.299723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.299746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.299769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.299791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 
00:52:24.299815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.299837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.299860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.299883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.299905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.299928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.299950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.299974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.299984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.299998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.300010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.300033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.300056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.300080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.300103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.300126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.300149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.300171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.787 [2024-07-16 00:52:24.300195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:122360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300285] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:122376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:122408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:122416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:122448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:122480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.787 [2024-07-16 00:52:24.300889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.787 [2024-07-16 00:52:24.300900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.300912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.300922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.300935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.300945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.300957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.300968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.300980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 
00:52:24.300990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:122768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301446] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.788 [2024-07-16 00:52:24.301537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.788 [2024-07-16 00:52:24.301910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.301986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.301997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.302008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.302032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.302054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.302076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.302098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.302120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302131] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.788 [2024-07-16 00:52:24.302141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.788 [2024-07-16 00:52:24.302163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.788 [2024-07-16 00:52:24.302186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.788 [2024-07-16 00:52:24.302207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.788 [2024-07-16 00:52:24.302230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.788 [2024-07-16 00:52:24.302252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.788 [2024-07-16 00:52:24.302283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.788 [2024-07-16 00:52:24.302307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.788 [2024-07-16 00:52:24.302330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.788 [2024-07-16 00:52:24.302352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.788 [2024-07-16 00:52:24.302377] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.789 [2024-07-16 00:52:24.302388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122296 len:8 PRP1 0x0 PRP2 0x0 00:26:17.789 [2024-07-16 00:52:24.302398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:24.302411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.789 [2024-07-16 00:52:24.302419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.789 [2024-07-16 00:52:24.302428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122304 len:8 PRP1 0x0 PRP2 0x0 00:26:17.789 [2024-07-16 00:52:24.302438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:24.302449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.789 [2024-07-16 00:52:24.302456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.789 [2024-07-16 00:52:24.302465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122312 len:8 PRP1 0x0 PRP2 0x0 00:26:17.789 [2024-07-16 00:52:24.302475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:24.302485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.789 [2024-07-16 00:52:24.302493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.789 [2024-07-16 00:52:24.302501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122320 len:8 PRP1 0x0 PRP2 0x0 00:26:17.789 [2024-07-16 00:52:24.302510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:24.302520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.789 [2024-07-16 00:52:24.302528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.789 [2024-07-16 00:52:24.302536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122328 len:8 PRP1 0x0 PRP2 0x0 00:26:17.789 [2024-07-16 00:52:24.302547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:24.302557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.789 [2024-07-16 00:52:24.302565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.789 [2024-07-16 00:52:24.302573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122336 len:8 PRP1 0x0 PRP2 0x0 00:26:17.789 [2024-07-16 00:52:24.302583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:24.302593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.789 [2024-07-16 00:52:24.302601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:17.789 [2024-07-16 00:52:24.302612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122344 len:8 PRP1 0x0 PRP2 0x0 00:26:17.789 [2024-07-16 00:52:24.302621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:24.302668] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdb1b40 was disconnected and freed. reset controller. 00:26:17.789 [2024-07-16 00:52:24.302681] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:17.789 [2024-07-16 00:52:24.302706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.789 [2024-07-16 00:52:24.302717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:24.302729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.789 [2024-07-16 00:52:24.302739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:24.302750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.789 [2024-07-16 00:52:24.302760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:24.302771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.789 [2024-07-16 00:52:24.302781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:24.302791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.789 [2024-07-16 00:52:24.302818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe6c40 (9): Bad file descriptor 00:26:17.789 [2024-07-16 00:52:24.307052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.789 [2024-07-16 00:52:24.352134] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
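The burst above is the expected signature of a path failover in this test: every command still queued on the old qpair is completed with "ABORTED - SQ DELETION", the qpair is disconnected and freed, bdev_nvme fails over from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset completes. When triaging a run like this it can help to collapse the thousands of per-command notices into a one-line summary per event. The following is an illustrative sketch, not part of the SPDK test suite; the log file name and the regular expressions are assumptions based only on the nvme_qpair.c / bdev_nvme.c message formats visible in this console output.

#!/usr/bin/env python3
# Hypothetical helper: summarize "ABORTED - SQ DELETION" bursts and failover
# events from a console log shaped like the output above. File name and
# message patterns are assumptions derived from this log, not an SPDK API.
import re
import sys
from collections import Counter

# Per-command notices from nvme_io_qpair_print_command(), e.g.
# "*NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122936 len:8 ..."
CMD_RE = re.compile(
    r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
# Failover notices from bdev_nvme_failover_trid(), e.g.
# "Start failover from 10.0.0.2:4421 to 10.0.0.2:4422"
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

def summarize(path):
    per_op = Counter()   # aborted command count per opcode name
    lbas = []            # LBAs touched by the aborted commands
    failovers = []       # (old trid, new trid) pairs seen in the log
    with open(path, errors="replace") as fh:
        for line in fh:
            m = CMD_RE.search(line)
            if m:
                op, _sqid, _cid, _nsid, lba, _length = m.groups()
                per_op[op] += 1
                lbas.append(int(lba))
            m = FAILOVER_RE.search(line)
            if m:
                failovers.append((m.group(1), m.group(2)))
    print(f"aborted commands: {per_op['WRITE']} WRITE, {per_op['READ']} READ")
    if lbas:
        print(f"lba range touched: {min(lbas)}..{max(lbas)}")
    for src, dst in failovers:
        print(f"failover: {src} -> {dst}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")

Run against a saved copy of this console output it would report the WRITE/READ abort counts, the LBA range involved, and each failover transition (here 10.0.0.2:4421 -> 10.0.0.2:4422), which is usually enough to confirm the aborts are the intentional side effect of the failover rather than a real I/O failure.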
00:26:17.789 [2024-07-16 00:52:28.772842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.772888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.772908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.772919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.772932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.772943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.772955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.772965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.772977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.772987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.772999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773114] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.789 [2024-07-16 00:52:28.773505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773562] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89568 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.789 [2024-07-16 00:52:28.773901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.789 [2024-07-16 00:52:28.773913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.773922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.773934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.773943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.773955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.773965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.773977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.773987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.773999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 
[2024-07-16 00:52:28.774009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774911] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.774977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.774988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.775000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.775009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.775021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.775032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.775045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.775055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.775067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.775076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.775089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.790 [2024-07-16 00:52:28.775100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.775112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.775121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.790 [2024-07-16 00:52:28.775133] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.790 [2024-07-16 00:52:28.775142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.791 [2024-07-16 00:52:28.775164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.791 [2024-07-16 00:52:28.775185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.791 [2024-07-16 00:52:28.775206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.791 [2024-07-16 00:52:28.775229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.791 [2024-07-16 00:52:28.775253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.791 [2024-07-16 00:52:28.775282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.791 [2024-07-16 00:52:28.775303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.775342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.775351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.791 [2024-07-16 00:52:28.775407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 
00:52:28.775418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.791 [2024-07-16 00:52:28.775428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.791 [2024-07-16 00:52:28.775449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.791 [2024-07-16 00:52:28.775469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe6c40 is same with the state(5) to be set 00:26:17.791 [2024-07-16 00:52:28.775749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.775759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.775767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90120 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.775777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.775796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.775805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90128 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.775814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.775832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.775843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90136 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.775852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.775873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.775881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90144 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.775891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 
00:52:28.775901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.775909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.775917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90152 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.775927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.775944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.775952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90160 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.775961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.775971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.775979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.775987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90168 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.775998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.776015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90176 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.776035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.776055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90184 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.776074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.776094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90192 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.776111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776121] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.776131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90200 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.776149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.776167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90208 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.776185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.776202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90216 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.776219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.776237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90224 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.776262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.776281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90232 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.776300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.776317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90240 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.776334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:26:17.791 [2024-07-16 00:52:28.776352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90248 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.776371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.776389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90256 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.776407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.776429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89240 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.776448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.776465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89248 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.776483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.776493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.776501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.776509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89256 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.786962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.786984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.786994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.787005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89264 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.787018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.787031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.787041] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.787053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89272 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.787068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.787081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.787091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.787103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89280 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.787117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.787130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.787141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.787153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89288 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.787166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.787179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.787189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.787200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89296 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.787217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.787230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.787241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.787252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89304 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.787273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.787287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.787297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.787308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89312 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.787322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.787336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.787347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.787358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89320 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.787372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.791 [2024-07-16 00:52:28.787385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.791 [2024-07-16 00:52:28.787395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.791 [2024-07-16 00:52:28.787407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89328 len:8 PRP1 0x0 PRP2 0x0 00:26:17.791 [2024-07-16 00:52:28.787421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.787434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.787444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.787457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89336 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.787470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.787484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.787494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.787505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89344 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.787518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.787532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.787542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.787553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89352 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.787566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.787581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.787593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.787610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89360 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.787625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.787640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.787652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 
00:52:28.787665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89368 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.787680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.787694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.787705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.787716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89376 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.787729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.787743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.787754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.787765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89384 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.787779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.787791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.787802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.787813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89392 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.787825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.787839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.787849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.787860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89400 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.787873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.787887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.787897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.787908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89408 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.787921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.787935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.787946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.787956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89416 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.787970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.787983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.787996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89424 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89432 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89440 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89448 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89456 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:89472 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89480 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89488 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89496 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89504 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89512 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89520 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 
[2024-07-16 00:52:28.788588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89528 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89536 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89544 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89552 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89560 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89568 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89576 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.788952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.788962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.788974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89584 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.788987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.789000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.789010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.789022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89592 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.789035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.789049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.789059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.789071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89600 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.789083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.789098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.789109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.789120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89608 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.789134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.789148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.789160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.789171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89616 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.789184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.789198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.789209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.789220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89624 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.789233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.789247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.789266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.792 [2024-07-16 00:52:28.789278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89632 len:8 PRP1 0x0 PRP2 0x0 00:26:17.792 [2024-07-16 00:52:28.789290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.792 [2024-07-16 00:52:28.789304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.792 [2024-07-16 00:52:28.789315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89640 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.789353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.789364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89648 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.789403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.789413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89656 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.789450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.789461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89664 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:17.793 [2024-07-16 00:52:28.789501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.789511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89672 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.789552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.789562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89680 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.789601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.789610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89688 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.789648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.789658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89696 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.789695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.789705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89704 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.789743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.789753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89712 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.789792] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.789802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89720 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.789839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.789849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89728 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.789891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.789902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89736 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.789942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.789952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.789964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89744 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.789977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.789991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89752 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89760 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89768 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89776 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89784 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89792 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89800 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89808 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 
00:52:28.790409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89816 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89824 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89832 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89840 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89848 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89856 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790725] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89864 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89872 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89880 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89888 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89896 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.793 [2024-07-16 00:52:28.790960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.793 [2024-07-16 00:52:28.790971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.793 [2024-07-16 00:52:28.790983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89904 len:8 PRP1 0x0 PRP2 0x0 00:26:17.793 [2024-07-16 00:52:28.790995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.791009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.791019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.791030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89912 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.791043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.791057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.797584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.797610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89920 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.797631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.797655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.797669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.797684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89928 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.797702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.797719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.797733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.797748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89936 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.797766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.797784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.797798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.797812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89944 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.797829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.797847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.797861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.797875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89952 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.797893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.797911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.797924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 
00:52:28.797939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89960 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.797955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.797977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.797990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89968 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89976 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89984 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89992 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90000 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90008 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90016 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90024 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90032 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89464 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90040 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:90048 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90056 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90064 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.798945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90072 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.798962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.798980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.798993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.799008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90080 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.799025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.799043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.799057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.799072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90088 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.799089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.799107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.799121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.799135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90096 len:8 PRP1 0x0 PRP2 0x0 
00:26:17.794 [2024-07-16 00:52:28.799152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.799174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.799189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.799203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90104 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.799220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.799239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.794 [2024-07-16 00:52:28.799253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.794 [2024-07-16 00:52:28.799276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:8 PRP1 0x0 PRP2 0x0 00:26:17.794 [2024-07-16 00:52:28.799294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.794 [2024-07-16 00:52:28.799364] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbe2070 was disconnected and freed. reset controller. 00:26:17.794 [2024-07-16 00:52:28.799384] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:17.794 [2024-07-16 00:52:28.799402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.794 [2024-07-16 00:52:28.799464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe6c40 (9): Bad file descriptor 00:26:17.794 [2024-07-16 00:52:28.807513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.794 [2024-07-16 00:52:28.935505] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
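The block above is the bdevperf log captured in try.txt for the timed run: every request still queued on the deleted submission queue is completed manually with ABORTED - SQ DELETION status, the path fails over from 10.0.0.2:4422 back to 10.0.0.2:4420, and the controller reset completes. A minimal sketch of the pass/fail check that failover.sh runs a few lines below (the try.txt path and the expected count are taken from that part of the trace):

  # count the successful controller resets recorded during the 15-second bdevperf run;
  # the count=3 / (( count != 3 )) check visible below expects exactly three
  try_txt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  count=$(grep -c 'Resetting controller successful' "$try_txt")
  if (( count != 3 )); then
      echo "expected 3 successful resets, got $count" >&2
      exit 1
  fi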
00:26:17.794 00:26:17.794 Latency(us) 00:26:17.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.794 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:17.794 Verification LBA range: start 0x0 length 0x4000 00:26:17.794 NVMe0n1 : 15.06 4919.09 19.22 767.05 0.00 22414.25 640.47 50045.67 00:26:17.794 =================================================================================================================== 00:26:17.794 Total : 4919.09 19.22 767.05 0.00 22414.25 640.47 50045.67 00:26:17.794 Received shutdown signal, test time was about 15.000000 seconds 00:26:17.794 00:26:17.794 Latency(us) 00:26:17.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.794 =================================================================================================================== 00:26:17.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.794 00:52:34 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:17.794 00:52:34 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:17.794 00:52:34 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:17.794 00:52:34 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3157791 00:26:17.794 00:52:34 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:17.794 00:52:34 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3157791 /var/tmp/bdevperf.sock 00:26:17.794 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3157791 ']' 00:26:17.794 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:17.794 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:17.794 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:17.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:17.794 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:17.794 00:52:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:17.794 00:52:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:17.794 00:52:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:26:17.794 00:52:35 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:17.794 [2024-07-16 00:52:35.521395] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:17.794 00:52:35 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:18.053 [2024-07-16 00:52:35.778489] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:18.053 00:52:35 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:18.356 NVMe0n1 00:26:18.357 00:52:36 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:18.950 00:26:18.950 00:52:36 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:19.209 00:26:19.209 00:52:36 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:19.209 00:52:36 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:19.467 00:52:37 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:19.725 00:52:37 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:23.017 00:52:40 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:23.017 00:52:40 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:23.017 00:52:40 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:23.017 00:52:40 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3158763 00:26:23.017 00:52:40 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3158763 00:26:24.394 0 00:26:24.394 00:52:41 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:24.394 [2024-07-16 00:52:35.020273] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:26:24.394 [2024-07-16 00:52:35.020340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3157791 ] 00:26:24.394 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.394 [2024-07-16 00:52:35.102767] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.394 [2024-07-16 00:52:35.184679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.394 [2024-07-16 00:52:37.374977] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:24.394 [2024-07-16 00:52:37.375031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.394 [2024-07-16 00:52:37.375047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.394 [2024-07-16 00:52:37.375059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.394 [2024-07-16 00:52:37.375070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.394 [2024-07-16 00:52:37.375083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.394 [2024-07-16 00:52:37.375094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.394 [2024-07-16 00:52:37.375106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.394 [2024-07-16 00:52:37.375116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.394 [2024-07-16 00:52:37.375126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.394 [2024-07-16 00:52:37.375161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.394 [2024-07-16 00:52:37.375180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e35c40 (9): Bad file descriptor 00:26:24.394 [2024-07-16 00:52:37.387848] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:24.394 Running I/O for 1 seconds... 
00:26:24.394 00:26:24.394 Latency(us) 00:26:24.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.394 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:24.394 Verification LBA range: start 0x0 length 0x4000 00:26:24.394 NVMe0n1 : 1.02 3755.57 14.67 0.00 0.00 33931.24 5272.67 29431.62 00:26:24.394 =================================================================================================================== 00:26:24.394 Total : 3755.57 14.67 0.00 0.00 33931.24 5272.67 29431.62 00:26:24.394 00:52:41 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:24.394 00:52:41 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:24.394 00:52:42 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:24.653 00:52:42 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:24.653 00:52:42 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:24.912 00:52:42 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:25.171 00:52:42 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:28.461 00:52:45 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:28.461 00:52:45 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:28.461 00:52:46 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3157791 00:26:28.461 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3157791 ']' 00:26:28.461 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3157791 00:26:28.461 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:28.461 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:28.461 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3157791 00:26:28.461 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:28.461 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:28.461 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3157791' 00:26:28.461 killing process with pid 3157791 00:26:28.461 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3157791 00:26:28.461 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3157791 00:26:28.720 00:52:46 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:28.720 00:52:46 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:28.980 
00:52:46 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:28.980 rmmod nvme_tcp 00:26:28.980 rmmod nvme_fabrics 00:26:28.980 rmmod nvme_keyring 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3154530 ']' 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3154530 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3154530 ']' 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3154530 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3154530 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3154530' 00:26:28.980 killing process with pid 3154530 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3154530 00:26:28.980 00:52:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3154530 00:26:29.546 00:52:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:29.547 00:52:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:29.547 00:52:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:29.547 00:52:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:29.547 00:52:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:29.547 00:52:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.547 00:52:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:29.547 00:52:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.449 00:52:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:31.449 00:26:31.449 real 0m39.247s 00:26:31.449 user 2m5.891s 00:26:31.449 sys 0m7.763s 00:26:31.449 00:52:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:31.449 00:52:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
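Teardown, as traced above, in condensed form (PID, NQN and interface name are the values from this run; rpc.py again stands for the full spdk/scripts/rpc.py path, and the autotest killprocess helper is approximated by a plain kill):

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the test subsystem
  modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics and nvme_keyring, as in the rmmod output above
  kill 3154530 && wait 3154530     # stop the nvmf_tgt serving this test; wait applies only if it is a child of this shell
  ip -4 addr flush cvl_0_1         # clear the initiator-side address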
00:26:31.449 ************************************ 00:26:31.449 END TEST nvmf_failover 00:26:31.449 ************************************ 00:26:31.449 00:52:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:31.449 00:52:49 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:31.449 00:52:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:31.449 00:52:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:31.449 00:52:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:31.449 ************************************ 00:26:31.449 START TEST nvmf_host_discovery 00:26:31.449 ************************************ 00:26:31.449 00:52:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:31.708 * Looking for test storage... 00:26:31.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:31.708 00:52:49 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:26:31.708 00:52:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.320 00:52:54 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:38.320 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:38.320 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:38.320 00:52:54 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:38.320 Found net devices under 0000:af:00.0: cvl_0_0 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:38.320 Found net devices under 0000:af:00.1: cvl_0_1 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:38.320 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:38.321 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:38.321 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.321 00:52:54 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:38.321 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:38.321 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:38.321 00:52:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:38.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:38.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:26:38.321 00:26:38.321 --- 10.0.0.2 ping statistics --- 00:26:38.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.321 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:38.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:38.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:26:38.321 00:26:38.321 --- 10.0.0.1 ping statistics --- 00:26:38.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.321 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3163532 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
3163532 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3163532 ']' 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:38.321 00:52:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.321 [2024-07-16 00:52:55.353651] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:26:38.321 [2024-07-16 00:52:55.353708] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.321 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.321 [2024-07-16 00:52:55.442854] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.321 [2024-07-16 00:52:55.545590] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:38.321 [2024-07-16 00:52:55.545643] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:38.321 [2024-07-16 00:52:55.545656] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:38.321 [2024-07-16 00:52:55.545667] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:38.321 [2024-07-16 00:52:55.545676] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:38.321 [2024-07-16 00:52:55.545709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.580 [2024-07-16 00:52:56.333639] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.580 [2024-07-16 00:52:56.345801] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.580 null0 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:38.580 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.581 null1 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3163636 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3163636 /tmp/host.sock 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3163636 ']' 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:38.581 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:38.581 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.839 [2024-07-16 00:52:56.425457] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:26:38.839 [2024-07-16 00:52:56.425511] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3163636 ] 00:26:38.839 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.839 [2024-07-16 00:52:56.508809] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.839 [2024-07-16 00:52:56.599334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.098 00:52:56 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 null0 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.098 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:39.357 00:52:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.357 [2024-07-16 00:52:57.051770] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.357 00:52:57 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.357 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:26:39.616 00:52:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:40.181 [2024-07-16 00:52:57.765036] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:40.181 [2024-07-16 00:52:57.765060] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:40.181 [2024-07-16 00:52:57.765076] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:40.181 [2024-07-16 00:52:57.893528] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:40.439 [2024-07-16 00:52:58.116814] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:26:40.439 [2024-07-16 00:52:58.116839] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:40.700 00:52:58 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:40.700 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:40.701 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:40.960 00:52:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.896 [2024-07-16 00:52:59.679869] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:41.896 [2024-07-16 00:52:59.680351] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:41.896 [2024-07-16 00:52:59.680380] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:41.896 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.154 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.154 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:42.154 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:42.154 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" 
]]' 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:42.155 [2024-07-16 00:52:59.806857] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:42.155 00:52:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:42.155 [2024-07-16 00:52:59.912777] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:42.155 [2024-07-16 00:52:59.912800] 
bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:42.155 [2024-07-16 00:52:59.912807] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.090 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.349 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:43.349 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:43.349 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:43.349 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:43.349 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.350 [2024-07-16 00:53:00.964055] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:43.350 [2024-07-16 00:53:00.964084] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:43.350 [2024-07-16 00:53:00.971781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.350 [2024-07-16 00:53:00.971806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.350 [2024-07-16 00:53:00.971819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.350 [2024-07-16 00:53:00.971830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.350 [2024-07-16 00:53:00.971841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.350 [2024-07-16 00:53:00.971851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.350 [2024-07-16 00:53:00.971861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.350 [2024-07-16 00:53:00.971871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.350 [2024-07-16 00:53:00.971886] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d6610 is same with the state(5) to be set 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:43.350 [2024-07-16 00:53:00.981791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d6610 (9): Bad file descriptor 00:26:43.350 00:53:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.350 [2024-07-16 00:53:00.991833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:43.350 [2024-07-16 00:53:00.992104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.350 [2024-07-16 00:53:00.992124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d6610 with addr=10.0.0.2, port=4420 00:26:43.350 [2024-07-16 00:53:00.992136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d6610 is same with the state(5) to be set 00:26:43.350 [2024-07-16 00:53:00.992151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d6610 (9): Bad file descriptor 00:26:43.350 [2024-07-16 00:53:00.992175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:43.350 [2024-07-16 00:53:00.992185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:43.350 [2024-07-16 00:53:00.992196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:43.350 [2024-07-16 00:53:00.992211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
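The retry loop traced throughout this step comes from the generic waitforcondition helper (common/autotest_common.sh, lines 912-918 in this build). A minimal sketch of what that helper appears to do, reconstructed from the xtrace alone and not copied from the SPDK source, is:

waitforcondition() {
    # Poll a shell condition up to 10 times, one second apart (per the trace:
    # local max=10, (( max-- )), eval "$cond", sleep 1, return 0 on success).
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1   # assumption: the traced runs never exhaust the retries
}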
00:26:43.350 [2024-07-16 00:53:01.001899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:43.350 [2024-07-16 00:53:01.002191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.350 [2024-07-16 00:53:01.002209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d6610 with addr=10.0.0.2, port=4420 00:26:43.350 [2024-07-16 00:53:01.002219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d6610 is same with the state(5) to be set 00:26:43.350 [2024-07-16 00:53:01.002233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d6610 (9): Bad file descriptor 00:26:43.350 [2024-07-16 00:53:01.002262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:43.350 [2024-07-16 00:53:01.002273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:43.350 [2024-07-16 00:53:01.002282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:43.350 [2024-07-16 00:53:01.002296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.350 [2024-07-16 00:53:01.011960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:43.350 [2024-07-16 00:53:01.012245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.350 [2024-07-16 00:53:01.012272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d6610 with addr=10.0.0.2, port=4420 00:26:43.350 [2024-07-16 00:53:01.012283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d6610 is same with the state(5) to be set 00:26:43.350 [2024-07-16 00:53:01.012299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d6610 (9): Bad file descriptor 00:26:43.350 [2024-07-16 00:53:01.012327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:43.350 [2024-07-16 00:53:01.012337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:43.350 [2024-07-16 00:53:01.012347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:43.350 [2024-07-16 00:53:01.012361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
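For reference, the two list helpers exercised on almost every step of this test (host/discovery.sh@59 and @55 in the trace) boil down to rpc_cmd pipelines against the host-side application socket; a sketch inferred from the traced commands (the real script may differ in detail):

get_subsystem_names() {
    # Controller names as seen by the host-side bdev_nvme module, e.g. "nvme0".
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # Block devices created by discovery, e.g. "nvme0n1 nvme0n2".
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}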
00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:43.350 [2024-07-16 00:53:01.022026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:43.350 [2024-07-16 00:53:01.022178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.350 [2024-07-16 00:53:01.022194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d6610 with addr=10.0.0.2, port=4420 00:26:43.350 [2024-07-16 00:53:01.022204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d6610 is same with the state(5) to be set 00:26:43.350 [2024-07-16 00:53:01.022218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d6610 (9): Bad file descriptor 00:26:43.350 [2024-07-16 00:53:01.022232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:43.350 [2024-07-16 00:53:01.022241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:43.350 [2024-07-16 00:53:01.022250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:43.350 [2024-07-16 00:53:01.022269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
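The notification checks (host/discovery.sh@74-80) count bdev add/remove events reported since the last checkpoint via notify_get_notifications; a hedged sketch based on the notification_count and notify_id values printed above (helper names are taken from the trace, the bodies are inferred):

get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
    # Advance the checkpoint so the next check only sees new events
    # (matches the traced progression notify_id = 0 -> 1 -> 2 -> 4).
    notify_id=$((notify_id + notification_count))
}

is_notification_count_eq() {
    local expected_count=$1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}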
00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:43.350 [2024-07-16 00:53:01.032086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:43.350 [2024-07-16 00:53:01.032215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.350 [2024-07-16 00:53:01.032233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d6610 with addr=10.0.0.2, port=4420 00:26:43.350 [2024-07-16 00:53:01.032243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d6610 is same with the state(5) to be set 00:26:43.350 [2024-07-16 00:53:01.032265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d6610 (9): Bad file descriptor 00:26:43.350 [2024-07-16 00:53:01.032279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:43.350 [2024-07-16 00:53:01.032288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:43.350 [2024-07-16 00:53:01.032302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:43.350 [2024-07-16 00:53:01.032316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.350 [2024-07-16 00:53:01.042151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:43.350 [2024-07-16 00:53:01.042346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.350 [2024-07-16 00:53:01.042364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d6610 with addr=10.0.0.2, port=4420 00:26:43.350 [2024-07-16 00:53:01.042374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d6610 is same with the state(5) to be set 00:26:43.350 [2024-07-16 00:53:01.042389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d6610 (9): Bad file descriptor 00:26:43.350 [2024-07-16 00:53:01.042402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:43.350 [2024-07-16 00:53:01.042410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:43.350 [2024-07-16 00:53:01.042419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:43.350 [2024-07-16 00:53:01.042434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
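The path check that follows (host/discovery.sh@131 via @63) verifies which listener ports the controller is still attached to; a sketch of the traced pipeline:

get_subsystem_paths() {
    # Lists the trsvcid (TCP port) of every connected path for one controller,
    # e.g. "4420 4421" before the 4420 listener is removed and "4421" after.
    local name=$1
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}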
00:26:43.350 [2024-07-16 00:53:01.050781] bdev_nvme.c:6775:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:43.350 [2024-07-16 00:53:01.050803] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:43.350 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.351 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:43.610 
00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:43.610 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.611 00:53:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.984 [2024-07-16 00:53:02.407448] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:44.984 [2024-07-16 00:53:02.407470] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:44.984 [2024-07-16 00:53:02.407486] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:44.984 [2024-07-16 00:53:02.494781] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:44.984 [2024-07-16 00:53:02.561779] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:44.984 [2024-07-16 00:53:02.561815] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:44.984 00:53:02 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.984 request: 00:26:44.984 { 00:26:44.984 "name": "nvme", 00:26:44.984 "trtype": "tcp", 00:26:44.984 "traddr": "10.0.0.2", 00:26:44.984 "adrfam": "ipv4", 00:26:44.984 "trsvcid": "8009", 00:26:44.984 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:44.984 "wait_for_attach": true, 00:26:44.984 "method": "bdev_nvme_start_discovery", 00:26:44.984 "req_id": 1 00:26:44.984 } 00:26:44.984 Got JSON-RPC error response 00:26:44.984 response: 00:26:44.984 { 00:26:44.984 "code": -17, 00:26:44.984 "message": "File exists" 00:26:44.984 } 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.984 request: 00:26:44.984 { 00:26:44.984 "name": "nvme_second", 00:26:44.984 "trtype": "tcp", 00:26:44.984 "traddr": "10.0.0.2", 00:26:44.984 "adrfam": "ipv4", 00:26:44.984 "trsvcid": "8009", 00:26:44.984 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:44.984 "wait_for_attach": true, 00:26:44.984 "method": "bdev_nvme_start_discovery", 00:26:44.984 "req_id": 1 00:26:44.984 } 00:26:44.984 Got JSON-RPC error response 00:26:44.984 response: 00:26:44.984 { 00:26:44.984 "code": -17, 00:26:44.984 "message": "File exists" 00:26:44.984 } 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.984 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# xargs 00:26:45.242 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.242 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:45.242 00:53:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:45.242 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:45.242 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:45.242 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:45.242 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:45.242 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:45.242 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:45.242 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:45.242 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.242 00:53:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:46.176 [2024-07-16 00:53:03.857531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-07-16 00:53:03.857565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x910280 with addr=10.0.0.2, port=8010 00:26:46.176 [2024-07-16 00:53:03.857582] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:46.176 [2024-07-16 00:53:03.857592] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:46.176 [2024-07-16 00:53:03.857601] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:47.110 [2024-07-16 00:53:04.859969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.110 [2024-07-16 00:53:04.859999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x910280 with addr=10.0.0.2, port=8010 00:26:47.110 [2024-07-16 00:53:04.860014] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:47.110 [2024-07-16 00:53:04.860023] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:47.111 [2024-07-16 00:53:04.860031] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:48.045 [2024-07-16 00:53:05.862077] bdev_nvme.c:7031:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:48.045 request: 00:26:48.045 { 00:26:48.045 "name": "nvme_second", 00:26:48.045 "trtype": "tcp", 00:26:48.045 "traddr": "10.0.0.2", 00:26:48.045 "adrfam": "ipv4", 00:26:48.045 "trsvcid": "8010", 00:26:48.045 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:48.045 "wait_for_attach": false, 00:26:48.046 "attach_timeout_ms": 3000, 00:26:48.046 "method": "bdev_nvme_start_discovery", 00:26:48.046 "req_id": 1 
00:26:48.046 } 00:26:48.046 Got JSON-RPC error response 00:26:48.046 response: 00:26:48.046 { 00:26:48.046 "code": -110, 00:26:48.046 "message": "Connection timed out" 00:26:48.046 } 00:26:48.046 00:53:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:48.046 00:53:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:48.046 00:53:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:48.046 00:53:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:48.046 00:53:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:48.046 00:53:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:48.046 00:53:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:48.046 00:53:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:48.046 00:53:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.046 00:53:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.046 00:53:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:48.046 00:53:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:48.046 00:53:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3163636 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:48.304 rmmod nvme_tcp 00:26:48.304 rmmod nvme_fabrics 00:26:48.304 rmmod nvme_keyring 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3163532 ']' 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3163532 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3163532 ']' 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3163532 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:48.304 00:53:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3163532 
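The two discovery failures above are host/discovery.sh deliberately exercising bdev_nvme_start_discovery error paths: reusing the controller name nvme_second against the 8009 discovery service that is already attached returns JSON-RPC error -17 ("File exists"), while pointing it at port 8010, where nothing listens, with a 3000 ms attach timeout produces the repeated connect() errno 111 failures and finally -110 ("Connection timed out"). A minimal sketch of the same two calls, assuming the host application is still serving RPCs on /tmp/host.sock as in this run:

  # Duplicate controller name against the live 8009 discovery service -> error -17 (File exists)
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
      -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # No listener on 8010; -T 3000 bounds the attach attempt -> error -110 (Connection timed out)
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
      -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000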
00:26:48.304 00:53:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:48.304 00:53:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:48.304 00:53:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3163532' 00:26:48.304 killing process with pid 3163532 00:26:48.304 00:53:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3163532 00:26:48.304 00:53:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3163532 00:26:48.562 00:53:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:48.562 00:53:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:48.562 00:53:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:48.562 00:53:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:48.562 00:53:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:48.562 00:53:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.562 00:53:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.562 00:53:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:51.094 00:26:51.094 real 0m19.039s 00:26:51.094 user 0m23.967s 00:26:51.094 sys 0m5.943s 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.094 ************************************ 00:26:51.094 END TEST nvmf_host_discovery 00:26:51.094 ************************************ 00:26:51.094 00:53:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:51.094 00:53:08 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:51.094 00:53:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:51.094 00:53:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:51.094 00:53:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:51.094 ************************************ 00:26:51.094 START TEST nvmf_host_multipath_status 00:26:51.094 ************************************ 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:51.094 * Looking for test storage... 
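Before the multipath_status test starts, nvmftestfini above tears the discovery fixture down in a fixed order: unload the host-side NVMe modules, kill the nvmf_tgt process (pid 3163532 here), and flush the initiator-side test interface. A rough sketch of that cleanup, assuming the same cvl_0_1 interface name and that $nvmfpid holds the target's pid:

  sync
  modprobe -v -r nvme-tcp      # as logged above, this also drops nvme-fabrics and nvme-keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"              # killprocess in autotest_common.sh then waits for the pid to exit
  ip -4 addr flush cvl_0_1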
00:26:51.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:51.094 00:53:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:51.094 00:53:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.367 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:56.368 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:56.368 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
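Before any traffic can flow, nvmf/common.sh walks the PCI bus for supported NICs; with SPDK_TEST_NVMF_NICS=e810 it keeps only Intel E810 functions (vendor 0x8086, device 0x159b/0x1592 bound to the ice driver) and, as the next lines show, resolves each function to its netdev (cvl_0_0, cvl_0_1) through sysfs. A hand-run equivalent, assuming the same 0000:af:00.0 and 0000:af:00.1 addresses reported on this node:

  lspci -d 8086:159b                            # list the E810 functions found above
  ls /sys/bus/pci/devices/0000:af:00.0/net/     # -> cvl_0_0
  ls /sys/bus/pci/devices/0000:af:00.1/net/     # -> cvl_0_1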
00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:56.368 Found net devices under 0000:af:00.0: cvl_0_0 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:56.368 Found net devices under 0000:af:00.1: cvl_0_1 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:56.368 00:53:14 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.368 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:56.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:26:56.627 00:26:56.627 --- 10.0.0.2 ping statistics --- 00:26:56.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.627 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:56.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:26:56.627 00:26:56.627 --- 10.0.0.1 ping statistics --- 00:26:56.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.627 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3169046 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3169046 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3169046 ']' 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:56.627 00:53:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:56.886 [2024-07-16 00:53:14.472352] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
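nvmf_tcp_init above builds the two-port test topology out of the E810 pair: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the target application is then launched inside that namespace with core mask 0x3. A condensed sketch of the wiring, using the interface and namespace names from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                               # ~0.2 ms RTT as reported above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3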
00:26:56.886 [2024-07-16 00:53:14.472407] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.886 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.886 [2024-07-16 00:53:14.560040] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:56.886 [2024-07-16 00:53:14.649357] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.886 [2024-07-16 00:53:14.649398] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.886 [2024-07-16 00:53:14.649409] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.886 [2024-07-16 00:53:14.649417] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.886 [2024-07-16 00:53:14.649426] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:56.886 [2024-07-16 00:53:14.653280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.886 [2024-07-16 00:53:14.653285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.822 00:53:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.822 00:53:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:57.822 00:53:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:57.822 00:53:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:57.822 00:53:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:57.822 00:53:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.822 00:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3169046 00:26:57.822 00:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:58.080 [2024-07-16 00:53:15.692721] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.080 00:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:58.338 Malloc0 00:26:58.338 00:53:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:58.595 00:53:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:58.853 00:53:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:58.853 [2024-07-16 00:53:16.672411] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.111 00:53:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:59.111 [2024-07-16 00:53:16.933197] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:59.417 00:53:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:59.417 00:53:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3169584 00:26:59.417 00:53:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:59.417 00:53:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3169584 /var/tmp/bdevperf.sock 00:26:59.417 00:53:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3169584 ']' 00:26:59.417 00:53:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:59.417 00:53:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:59.417 00:53:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:59.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:59.417 00:53:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:59.417 00:53:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:00.372 00:53:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:00.372 00:53:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:27:00.372 00:53:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:00.630 00:53:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:00.887 Nvme0n1 00:27:01.147 00:53:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:01.712 Nvme0n1 00:27:01.712 00:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:01.712 00:53:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:03.610 00:53:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:03.610 00:53:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:03.868 00:53:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:04.125 00:53:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:05.062 00:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:05.062 00:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:05.062 00:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.062 00:53:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:05.321 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.321 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:05.321 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.321 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:05.579 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:05.579 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:05.579 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.579 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:05.836 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.836 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:05.836 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.836 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:06.093 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.093 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:06.093 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.093 00:53:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:06.352 00:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.352 00:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:06.352 00:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.352 00:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:06.610 00:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.610 00:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:06.610 00:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:06.868 00:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:07.126 00:53:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:08.500 00:53:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:08.500 00:53:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:08.500 00:53:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.500 00:53:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:08.500 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:08.500 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:08.500 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.500 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:08.758 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.758 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:08.758 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.758 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:09.016 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.016 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:09.016 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.016 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:09.273 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.273 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:09.273 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.273 00:53:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:09.532 00:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.532 00:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:09.532 00:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.532 00:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:09.791 00:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.791 00:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:09.791 00:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:10.049 00:53:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:10.307 00:53:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:11.242 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:11.242 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:11.242 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.242 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:11.500 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.500 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:11.500 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.500 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:11.758 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:11.758 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:11.758 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.758 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:12.016 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.016 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:12.016 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.016 00:53:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:12.274 00:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.274 00:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:12.274 00:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.274 00:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:12.842 00:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.842 00:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:12.842 00:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.842 00:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:12.842 00:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.842 00:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:12.842 00:53:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:13.100 00:53:30 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:13.358 00:53:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:14.735 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:14.735 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:14.735 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.735 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:14.735 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.735 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:14.735 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.735 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:14.993 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:14.993 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:14.993 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.993 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:15.252 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.252 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:15.252 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.252 00:53:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:15.510 00:53:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.510 00:53:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:15.510 00:53:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.510 00:53:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:15.768 00:53:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:27:15.768 00:53:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:15.768 00:53:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.768 00:53:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:16.026 00:53:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:16.027 00:53:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:16.027 00:53:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:16.284 00:53:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:16.542 00:53:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:17.479 00:53:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:17.479 00:53:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:17.479 00:53:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.479 00:53:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:17.738 00:53:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.738 00:53:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:17.738 00:53:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.738 00:53:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:17.996 00:53:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.996 00:53:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:17.996 00:53:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.996 00:53:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:18.254 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.254 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:27:18.255 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.255 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:18.513 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.513 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:18.513 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.513 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:18.771 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:18.771 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:18.771 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.771 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:19.030 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:19.030 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:19.030 00:53:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:19.289 00:53:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:19.548 00:53:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:20.925 00:53:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:20.926 00:53:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:20.926 00:53:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.926 00:53:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:20.926 00:53:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:20.926 00:53:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:20.926 00:53:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.926 00:53:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:21.184 00:53:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.184 00:53:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:21.184 00:53:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.184 00:53:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:21.444 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.444 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:21.444 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.444 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:21.703 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.703 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:21.703 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:21.703 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.961 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:21.961 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:21.961 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.961 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:22.220 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.220 00:53:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:22.478 00:53:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:22.478 00:53:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:27:22.736 00:53:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:22.993 00:53:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:23.928 00:53:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:23.928 00:53:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:23.928 00:53:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.928 00:53:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:24.187 00:53:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:24.187 00:53:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:24.187 00:53:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.187 00:53:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:24.445 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:24.445 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:24.445 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.445 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:24.704 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:24.704 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:24.704 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.704 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:24.963 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:24.963 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:24.963 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.963 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:25.222 00:53:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.222 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:25.222 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.222 00:53:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:25.481 00:53:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.481 00:53:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:25.481 00:53:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:25.739 00:53:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:25.997 00:53:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:26.933 00:53:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:26.933 00:53:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:26.933 00:53:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.933 00:53:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:27.192 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:27.192 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:27.192 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.192 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:27.451 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.451 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:27.451 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.451 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:27.710 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.710 00:53:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:27.710 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.710 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:28.278 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.278 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:28.278 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.278 00:53:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:28.278 00:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.278 00:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:28.278 00:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.278 00:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:28.537 00:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.537 00:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:28.537 00:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:28.796 00:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:29.054 00:53:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:30.051 00:53:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:30.051 00:53:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:30.051 00:53:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.051 00:53:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:30.309 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.309 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:30.309 00:53:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.309 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:30.568 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.568 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:30.568 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.568 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:30.827 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.827 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:30.827 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.827 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:31.086 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.086 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:31.086 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.086 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:31.345 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.345 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:31.345 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.345 00:53:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:31.605 00:53:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.605 00:53:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:31.605 00:53:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:31.605 00:53:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:31.864 00:53:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:33.240 00:53:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:33.240 00:53:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:33.240 00:53:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.240 00:53:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:33.240 00:53:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.240 00:53:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:33.240 00:53:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.240 00:53:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:33.498 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:33.498 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:33.498 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.498 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:33.756 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.756 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:33.756 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.756 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:34.015 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.015 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:34.015 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.015 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:34.275 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.275 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:34.275 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.275 00:53:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:34.534 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:34.534 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3169584 00:27:34.534 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3169584 ']' 00:27:34.534 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3169584 00:27:34.534 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:27:34.534 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:34.534 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3169584 00:27:34.534 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:27:34.534 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:27:34.534 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3169584' 00:27:34.534 killing process with pid 3169584 00:27:34.534 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3169584 00:27:34.534 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3169584 00:27:34.793 Connection closed with partial response: 00:27:34.793 00:27:34.793 00:27:34.793 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3169584 00:27:34.793 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:35.070 [2024-07-16 00:53:17.007280] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:27:35.070 [2024-07-16 00:53:17.007346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3169584 ] 00:27:35.070 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.070 [2024-07-16 00:53:17.123053] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.070 [2024-07-16 00:53:17.268032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.070 Running I/O for 90 seconds... 
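
[editor note] The trace above (multipath_status.sh@59-64) repeats one pattern: flip the ANA state of each TCP listener with nvmf_subsystem_listener_set_ana_state, sleep, then poll bdev_nvme_get_io_paths over the bdevperf RPC socket and compare the jq-extracted field against the expected value. A minimal bash sketch of that pattern, reconstructed from the traced commands only — the RPC calls, socket path, NQN, and jq filters are taken verbatim from the log, while the function bodies themselves are an assumption, not the verbatim test script:

  # Sketch inferred from the xtrace output above; not the upstream multipath_status.sh source.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  set_ANA_state() {   # $1 -> ANA state for listener 4420, $2 -> ANA state for listener 4421
      $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  port_status() {     # $1 port, $2 field (current|connected|accessible), $3 expected value
      local val
      val=$($rpc -s "$sock" bdev_nvme_get_io_paths | \
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ "$val" == "$3" ]]
  }

  # Usage matching the traced checks, e.g. after set_ANA_state non_optimized inaccessible; sleep 1:
  #   port_status 4420 accessible true
  #   port_status 4421 accessible false
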
00:27:35.070 [2024-07-16 00:53:34.002601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.070 [2024-07-16 00:53:34.002678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:35.070 [2024-07-16 00:53:34.002734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.070 [2024-07-16 00:53:34.002758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:35.070 [2024-07-16 00:53:34.002800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.070 [2024-07-16 00:53:34.002822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:35.070 [2024-07-16 00:53:34.002863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.070 [2024-07-16 00:53:34.002885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:35.070 [2024-07-16 00:53:34.002925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.070 [2024-07-16 00:53:34.002947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:35.070 [2024-07-16 00:53:34.002987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.070 [2024-07-16 00:53:34.003009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:35.070 [2024-07-16 00:53:34.003050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.070 [2024-07-16 00:53:34.003071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.070 [2024-07-16 00:53:34.003111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.070 [2024-07-16 00:53:34.003133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.070 [2024-07-16 00:53:34.003173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.070 [2024-07-16 00:53:34.003194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:35.070 [2024-07-16 00:53:34.003234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.003263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.003305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.003337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.003378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.003400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.003439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.003461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.003501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.003523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.003562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.003584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.003624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.003645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.003685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.003706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.003747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.003768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.003808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.003829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.003869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.003893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.003932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.003953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.003993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:35.071 [2024-07-16 00:53:34.004514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.004943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.004983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.005003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.005043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.005064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.005103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98968 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.005124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.005163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.005184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.005225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.005246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.005296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.005317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.005357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.005378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.005418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.005438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.005478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.005499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.005538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.005559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.005599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.005625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.007499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.007540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.007586] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.071 [2024-07-16 00:53:34.007608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:35.071 [2024-07-16 00:53:34.007648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.072 [2024-07-16 00:53:34.007670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.007710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.072 [2024-07-16 00:53:34.007731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.007772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.072 [2024-07-16 00:53:34.007793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.007832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.072 [2024-07-16 00:53:34.007854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.007893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.072 [2024-07-16 00:53:34.007915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.007955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.072 [2024-07-16 00:53:34.007976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.072 [2024-07-16 00:53:34.008037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.072 [2024-07-16 00:53:34.008099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.072 [2024-07-16 00:53:34.008160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 
00:53:34.008199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.072 [2024-07-16 00:53:34.008227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.072 [2024-07-16 00:53:34.008299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.072 [2024-07-16 00:53:34.008360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.008422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.008484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.008545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.008607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.008668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.008731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.008793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 
cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.008855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.008917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.008957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.008978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009428] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.009964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.009985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.010025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 
00:53:34.010047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.010086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.010107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.010147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.010168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.010208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.072 [2024-07-16 00:53:34.010229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:35.072 [2024-07-16 00:53:34.010276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.073 [2024-07-16 00:53:34.010298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.010337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.010358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.010398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.073 [2024-07-16 00:53:34.010419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.010459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.073 [2024-07-16 00:53:34.010481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.010525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.073 [2024-07-16 00:53:34.010547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.010586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.073 [2024-07-16 00:53:34.010607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.010647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98624 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.073 [2024-07-16 00:53:34.010668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.010708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.073 [2024-07-16 00:53:34.010729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.010769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.073 [2024-07-16 00:53:34.010790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.010829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.073 [2024-07-16 00:53:34.010851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.010891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.073 [2024-07-16 00:53:34.010913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.010952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.010973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011265] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.011817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.011839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.013672] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.013709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.013754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.013776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.013817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.013845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.013885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.013906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.013946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.013967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.014006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.014027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.014067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.014088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.014128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.014149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.014188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.014209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.014249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.014282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 
m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.014323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.073 [2024-07-16 00:53:34.014344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.014384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.014405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.014445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.014466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.014506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.014528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.014567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.014588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:35.073 [2024-07-16 00:53:34.014636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.073 [2024-07-16 00:53:34.014658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.014699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.014720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.014760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.014782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.014822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.014843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.014883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.014905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.014945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.014966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.015943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.015966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.016005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.016027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.016066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.016088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.016128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:35.074 [2024-07-16 00:53:34.016152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.016193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.016214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.016265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.016289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.016329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.016350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.016390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.016411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.016451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.016473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.016512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.016534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.016575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.016597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.016637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.016659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.016701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.074 [2024-07-16 00:53:34.016724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.074 [2024-07-16 00:53:34.016765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.016787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.016828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.016849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.016889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.016915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.016955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.016976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.017016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.017038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.017078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.017099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.017139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.017160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.017200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.017222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.018814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.018850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.018895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.018917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.018957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.018978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.019040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.019101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.019162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.019223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.019304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.019366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.019427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.019489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.019551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.075 
[2024-07-16 00:53:34.019591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.019613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.019674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.075 [2024-07-16 00:53:34.019736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.019799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.019861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.019924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.019964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.019986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:35.075 [2024-07-16 00:53:34.020924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.075 [2024-07-16 00:53:34.020945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.020985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.021768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.021950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.021990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.022012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.022072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.022134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.022194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.022263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.076 [2024-07-16 00:53:34.022325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.022390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.022451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.022512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.022572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.022633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.022694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.022755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.022816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.022876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.022938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.022977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.022999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.023038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.023059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.023103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.023125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.023165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.023186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.025004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.025042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:27:35.076 [2024-07-16 00:53:34.025086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.025108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.025148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.025169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.025209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.025230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.025282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.076 [2024-07-16 00:53:34.025305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:35.076 [2024-07-16 00:53:34.025344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.025366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.025406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.025427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.025466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.025487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.025527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.025548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.025588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.025610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.025650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.025682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.025722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.077 [2024-07-16 00:53:34.025744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.025784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.025805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.025844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.025865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.025905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.025926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.025965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.025986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:35.077 [2024-07-16 00:53:34.026908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.026948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.026969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.077 [2024-07-16 00:53:34.027892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:35.077 [2024-07-16 00:53:34.027935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.027958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.027997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.028018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.028058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.028078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.028118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.028140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.028179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.028200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.028239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.028272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.028313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.028334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.028373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.028395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.028434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.028455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.028496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.028517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.030101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.030182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.030244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:27:35.078 [2024-07-16 00:53:34.030322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.030383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.030444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.030508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.030575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.030638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.030699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.030760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.030822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.030884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.030946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.030967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.031007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.031032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.031073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.078 [2024-07-16 00:53:34.031094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.031134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.078 [2024-07-16 00:53:34.031156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.031196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.078 [2024-07-16 00:53:34.031218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.031265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.078 [2024-07-16 00:53:34.031287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.031328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.078 [2024-07-16 00:53:34.031349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.031389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.078 [2024-07-16 00:53:34.031411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.031451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.078 [2024-07-16 00:53:34.031473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.031514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.078 [2024-07-16 00:53:34.031535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.031576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.078 [2024-07-16 00:53:34.031597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:35.078 [2024-07-16 00:53:34.031638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.078 [2024-07-16 00:53:34.031660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.031700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.031721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.031762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.031783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.031828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.031849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.031889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.031911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.031951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.031973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:35.079 [2024-07-16 00:53:34.032158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 
nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.032952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.032975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.033015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.033036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.033076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.079 [2024-07-16 00:53:34.033099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.033139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.079 [2024-07-16 00:53:34.033162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:35.079 [2024-07-16 00:53:34.033201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.080 [2024-07-16 00:53:34.033223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.033273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.080 [2024-07-16 00:53:34.033295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.033335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.080 [2024-07-16 00:53:34.033360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.033401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.080 [2024-07-16 00:53:34.033422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.033464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.080 [2024-07-16 00:53:34.033486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.033526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.080 [2024-07-16 00:53:34.033547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.033587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.080 [2024-07-16 00:53:34.033608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.033648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.080 [2024-07-16 00:53:34.033670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.033709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.033730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.033771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.033791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.033830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.033852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.033891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.033912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.033952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.033973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:27:35.080 [2024-07-16 00:53:34.034013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.034033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.034073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.034099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.034138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.034160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.034199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.034220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.034268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.034290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.034330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.034351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.034390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.034411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.034452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.034473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.036249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.036298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.036343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.036367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.036408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.036431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.036471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.036493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.036534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.036555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.036595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.036618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.036664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.036687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.036727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.036748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.036788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.036810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.036850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.036873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.036913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.080 [2024-07-16 00:53:34.036935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:35.080 [2024-07-16 00:53:34.036973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.036996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.081 [2024-07-16 00:53:34.037058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.037120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.037183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.037245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.037319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.037382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.037450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.037513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.037575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:35.081 [2024-07-16 00:53:34.037638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.037701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.037763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.037825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.037889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.037951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.037991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.038943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.081 [2024-07-16 00:53:34.038970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:35.081 [2024-07-16 00:53:34.039010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.039073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.039135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.039197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.039268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.039331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.039394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.039457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:35.082 [2024-07-16 00:53:34.039522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.039584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.039646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.039708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.039774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.039839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.039861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.041457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.041495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.041539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.041563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.041604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.041626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.041666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.041689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.041729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.041752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.041793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.041815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.041855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.041877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.041917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.041939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.041979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.042002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.042043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.042066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.042118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.042141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.042180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.042203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.042244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.042278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.042318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.042342] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.042382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.042404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.042444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.042467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.042507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.082 [2024-07-16 00:53:34.042529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.042570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.082 [2024-07-16 00:53:34.042593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:35.082 [2024-07-16 00:53:34.042634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.082 [2024-07-16 00:53:34.042657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.042697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.042720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.042760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.042781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.042821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.042843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.042883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.042910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.042951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 
00:53:34.042974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98472 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.043976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.043999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.044039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.044061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.044102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.044125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.044165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.044187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.044226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.044249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.044302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.044325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.044365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.044386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.044426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.044448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.044492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.044515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.044555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.083 [2024-07-16 00:53:34.044578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.044618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.044641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:35.083 [2024-07-16 00:53:34.044682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.083 [2024-07-16 00:53:34.044705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.044744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.084 [2024-07-16 00:53:34.044767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.044808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.084 [2024-07-16 00:53:34.044831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:27:35.084 [2024-07-16 00:53:34.044870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.084 [2024-07-16 00:53:34.044893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.044932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.084 [2024-07-16 00:53:34.044955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.044994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.084 [2024-07-16 00:53:34.045017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.084 [2024-07-16 00:53:34.045079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.084 [2024-07-16 00:53:34.045141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.045204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.045286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.045348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.045411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.045473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.045536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.045598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.045660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.045722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.045784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.045846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.045887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.045910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.047695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.047732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.047782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.047806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.047845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.047869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.047909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.047931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.047970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.047993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.048033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.048056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.048096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.048119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.048160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.048182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.048222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.048244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.048295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.048319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.048359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.048381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.048421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.048444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.048483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:35.084 [2024-07-16 00:53:34.048506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.048546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.084 [2024-07-16 00:53:34.048573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.048613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.048635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.048675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.048697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:35.084 [2024-07-16 00:53:34.048738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.084 [2024-07-16 00:53:34.048760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.048799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.048822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.048863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.048885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.048925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.048947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.048987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.049939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.049961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:27:35.085 [2024-07-16 00:53:34.050391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.050960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.050983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.051023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.051046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.051087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.051111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.051151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.051174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.051214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.051236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.051285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.051309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.052904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.052942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.052987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.053010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.053050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.053072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.053111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.053133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.053173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.085 [2024-07-16 00:53:34.053194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:35.085 [2024-07-16 00:53:34.053234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.086 [2024-07-16 00:53:34.053275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.053317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.086 [2024-07-16 00:53:34.053339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.053379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.086 [2024-07-16 00:53:34.053400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.053440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.086 [2024-07-16 00:53:34.053461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.053500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.086 [2024-07-16 00:53:34.053523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.053563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.086 [2024-07-16 00:53:34.053586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.053626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.086 [2024-07-16 00:53:34.053649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.053689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.086 [2024-07-16 00:53:34.053711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.053751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.086 [2024-07-16 00:53:34.053774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.053814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:35.086 [2024-07-16 00:53:34.053836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.053875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.086 [2024-07-16 00:53:34.053898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.053939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.086 [2024-07-16 00:53:34.053960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.053999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.086 [2024-07-16 00:53:34.054025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.054954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.054975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 
dnr:0 00:27:35.086 [2024-07-16 00:53:34.055724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.055973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.086 [2024-07-16 00:53:34.055996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.056036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.086 [2024-07-16 00:53:34.056057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:35.086 [2024-07-16 00:53:34.056096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.087 [2024-07-16 00:53:34.056119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.087 [2024-07-16 00:53:34.056181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.087 [2024-07-16 00:53:34.056242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.087 [2024-07-16 00:53:34.056318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.087 [2024-07-16 00:53:34.056384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.087 [2024-07-16 00:53:34.056447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.087 [2024-07-16 00:53:34.056508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.087 [2024-07-16 00:53:34.056570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.087 [2024-07-16 00:53:34.056633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.056695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.056756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.056818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.056880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.056942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.056982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.057005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.057044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.057065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.057105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.057136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.057176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.057199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.057238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.057271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.057313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.057335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.057902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.057935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.058014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.058040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.058094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.058117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.058171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:35.087 [2024-07-16 00:53:34.058194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.058249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.058284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.058338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.058360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.058415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.058437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.058492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.058514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.058568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.058591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.058652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.058675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.058729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.058753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.058807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.058830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.058885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.058907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.058962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.058984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.059039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.087 [2024-07-16 00:53:34.059061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.059115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.059138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.059192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.059215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.059277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.059301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.059354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.059377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.059431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.059454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.059508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.059531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.059589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.059612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.059666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.059689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.059742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.059765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.059819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.059842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:35.087 [2024-07-16 00:53:34.059896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.087 [2024-07-16 00:53:34.059918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.059973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.059996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.060050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.060072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.060127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.060149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.060203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.060226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.060290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.060314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.060368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.060390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.060445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.060467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:27:35.088 [2024-07-16 00:53:34.060521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.060548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.060603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.060625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.060679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.060701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.060756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.060777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.060831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.060854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.060908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.060931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.060985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.061007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.061061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.061084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.061138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.061161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.061215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.061237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.061301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.061325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.061379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.061401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.061456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.061483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.061537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.061559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.061614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.061636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.061690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.061713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.061768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.061790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.061844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.061867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.061922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.061944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.061998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.062020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.062074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.062098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.062152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.062174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.062229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.062251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.062316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.062339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:34.062732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:34.062761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:49.654703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:49.654778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:49.654835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:49.654859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:49.654900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:49.654924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:49.654964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:49.654985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:49.655026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:35.088 [2024-07-16 00:53:49.655050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:49.655090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:49.655112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:35.088 [2024-07-16 00:53:49.655151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.088 [2024-07-16 00:53:49.655173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.655213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.655235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.655288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.655310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.655351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.655373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.655412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.655433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.655474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.655495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.655544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.655566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.655606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.655628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.655667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.655689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.655729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.655750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.655790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.655811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.655851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.655872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.655912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.655933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.655973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.655994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.656056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.656117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.656178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.656239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.656316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.656380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.656441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.656503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.656564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.656625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.656687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.656747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.656808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.656870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:27:35.089 [2024-07-16 00:53:49.656909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.656930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.656970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.656992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.657056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.657118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.657179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.657239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.657313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.657374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.657434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.657495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.657556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.657617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.089 [2024-07-16 00:53:49.657678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.657739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.657799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.657865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.657925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:35.089 [2024-07-16 00:53:49.657965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.089 [2024-07-16 00:53:49.657986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.658026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.658047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.658087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.658108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.658147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.658168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.658210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.658231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.662689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.662740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.662788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.662810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.662851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.662874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.662914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.662936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.662976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.662998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.663045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.663068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.663109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.663130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.663171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:35.090 [2024-07-16 00:53:49.663192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.663233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.663267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.663308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.663331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.663370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.663392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.663432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.663453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.663493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.663515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.663555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.663578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.663618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.663639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.663680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.663701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.663741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.663763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.663803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.663831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.665426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.665467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.665513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.665535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.665576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.665598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.665639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.665661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.665701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.665724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.665763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.665785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.665825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.665846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.665887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.665909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.665950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.665971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.666033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.666095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.666164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.666227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.666300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.666362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.666424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.666486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.666548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.666610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:27:35.090 [2024-07-16 00:53:49.666650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.090 [2024-07-16 00:53:49.666672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.666734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.666796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.090 [2024-07-16 00:53:49.666857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:35.090 [2024-07-16 00:53:49.666897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.666919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.666966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.666989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.667029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.667051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.667090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.667112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.667152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.667174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.667214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.667237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.667288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.667311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.667352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.667374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.667414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.667436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.667476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.667498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.667540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.667563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.667604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.667627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.667667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.667689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.667735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.667756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.671299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.671346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.671394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.671417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.671457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.671478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.671519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.671542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.671583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.671604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.671644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.671666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.671707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.671728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.671768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.671790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.671830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.671852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.671891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.671913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.671954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.671975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:35.091 [2024-07-16 00:53:49.672053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.672114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.672176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.672238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.672315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.672377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.672440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.672503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.672566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.672627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 
nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.672689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.672750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.672817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.672879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.672941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.672980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.673001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.673041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.673063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.673103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.673124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.673165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.091 [2024-07-16 00:53:49.673186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.091 [2024-07-16 00:53:49.673227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.091 [2024-07-16 00:53:49.673248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.673297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.673319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.673358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.673380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.673421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.673443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.679091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.679170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.679233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.679307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.679369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.679431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.679494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:27:35.092 [2024-07-16 00:53:49.679534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.679556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.679619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.679680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.679743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.679805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.679867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.679929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.679974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.679997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.680059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.680122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.680183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.680246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.680319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.680382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.680444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.680506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.680568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.680630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.680693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.680760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.680822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.680884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.680947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.680988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.092 [2024-07-16 00:53:49.681010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.681050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.681071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.681113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.681134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.681174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.681196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.681238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.092 [2024-07-16 00:53:49.681270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:35.092 [2024-07-16 00:53:49.681313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.681335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.681375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:35.093 [2024-07-16 00:53:49.681397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.681436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.093 [2024-07-16 00:53:49.681458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.681497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.681523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.681563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.681584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.681626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.681647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.685296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.685341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.685388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.093 [2024-07-16 00:53:49.685410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.685451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.093 [2024-07-16 00:53:49.685473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.685513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.093 [2024-07-16 00:53:49.685534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.685575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.685598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.685638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.685660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.685699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.093 [2024-07-16 00:53:49.685721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.685761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.093 [2024-07-16 00:53:49.685782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.685822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.093 [2024-07-16 00:53:49.685844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.685884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.093 [2024-07-16 00:53:49.685913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.685953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.093 [2024-07-16 00:53:49.685975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.686037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.686099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.686163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.686225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.686298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.686360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.686422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.093 [2024-07-16 00:53:49.686484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.093 [2024-07-16 00:53:49.686546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.093 [2024-07-16 00:53:49.686608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.686670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.686736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.686797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:35.093 [2024-07-16 00:53:49.686837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.093 [2024-07-16 00:53:49.686859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:27:35.093 [repeated nvme_qpair records condensed: for each outstanding I/O on sqid:1, nvme_io_qpair_print_command prints the READ or WRITE command (cid, nsid, lba, len, SGL type) and spdk_nvme_print_completion reports that it completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02); the same pair of records repeats for every queued command while the exercised path is inaccessible]
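Each completion condensed above carries status (03/02): status code type 0x3 is Path Related Status and status code 0x02 is Asymmetric Access Inaccessible, which is what the multipath status test expects while the path it is exercising sits in the inaccessible ANA state. A short grep pass over a saved copy of this console output is enough to tally the records offline; log.txt below is only a placeholder for wherever the output was captured.

# Count command prints by opcode, then count completions carrying the 03/02 status.
# "log.txt" is a stand-in for a saved copy of this console output.
grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' log.txt | sort | uniq -c
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' log.txt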
00:27:35.096 Received shutdown signal, test time was about 32.862038 seconds
00:27:35.096 Latency(us)
00:27:35.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:35.096 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:35.096 Verification LBA range: start 0x0 length 0x4000
00:27:35.096 Nvme0n1 : 32.86 4652.06 18.17 0.00 0.00 27439.00 3530.01 4087539.90
00:27:35.096 ===================================================================================================================
00:27:35.096 Total : 4652.06 18.17 0.00 0.00 27439.00 3530.01 4087539.90
00:27:35.096 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 --
# nvmfcleanup 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:35.356 rmmod nvme_tcp 00:27:35.356 rmmod nvme_fabrics 00:27:35.356 rmmod nvme_keyring 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3169046 ']' 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3169046 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3169046 ']' 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3169046 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:35.356 00:53:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3169046 00:27:35.356 00:53:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:35.356 00:53:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:35.356 00:53:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3169046' 00:27:35.356 killing process with pid 3169046 00:27:35.356 00:53:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3169046 00:27:35.356 00:53:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3169046 00:27:35.625 00:53:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:35.625 00:53:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:35.625 00:53:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:35.625 00:53:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:35.625 00:53:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:35.625 00:53:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.625 00:53:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:35.625 00:53:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.534 00:53:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:37.534 00:27:37.534 real 0m46.905s 00:27:37.534 user 2m13.395s 00:27:37.534 sys 0m11.306s 00:27:37.534 00:53:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 
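The epilogue above is the standard nvmftestfini teardown: delete the test subsystem over RPC, unload the host-side NVMe/TCP modules, stop the nvmf_tgt process (pid 3169046 in this run) after a kill -0 liveness check, remove the target network namespace, and flush the initiator-side address. A simplified standalone sketch of the same sequence, with the namespace removal written as an assumed ip netns delete since remove_spdk_ns is a helper function:

# Teardown mirroring nvmftestfini for this run's TCP setup (simplified sketch).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TGT_PID=3169046                                   # nvmf_tgt pid from this log
"$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp                           # also dropped nvme_fabrics/nvme_keyring in this run
modprobe -v -r nvme-fabrics
if kill -0 "$TGT_PID" 2>/dev/null; then           # only signal a process that is still alive
    kill "$TGT_PID"                               # the harness additionally waits on the pid it spawned
fi
ip netns delete cvl_0_0_ns_spdk 2>/dev/null       # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1                          # drop the initiator-side address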
00:27:37.534 00:53:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:37.534 ************************************ 00:27:37.534 END TEST nvmf_host_multipath_status 00:27:37.534 ************************************ 00:27:37.534 00:53:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:37.534 00:53:55 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:37.534 00:53:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:37.534 00:53:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.534 00:53:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:37.534 ************************************ 00:27:37.534 START TEST nvmf_discovery_remove_ifc 00:27:37.534 ************************************ 00:27:37.534 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:37.794 * Looking for test storage... 00:27:37.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.794 00:53:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.794 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.795 00:53:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:37.795 00:53:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.364 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:44.364 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:44.364 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:44.364 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:44.364 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@296 -- # local -ga e810 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:44.365 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:44.365 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:44.365 00:54:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:44.365 Found net devices under 0000:af:00.0: cvl_0_0 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:44.365 Found net devices under 0000:af:00.1: cvl_0_1 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:44.365 
00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:44.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:44.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:27:44.365 00:27:44.365 --- 10.0.0.2 ping statistics --- 00:27:44.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.365 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:44.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:44.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:27:44.365 00:27:44.365 --- 10.0.0.1 ping statistics --- 00:27:44.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.365 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3179605 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3179605 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3179605 ']' 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.365 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:44.366 00:54:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.366 [2024-07-16 00:54:01.376934] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:27:44.366 [2024-07-16 00:54:01.376989] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.366 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.366 [2024-07-16 00:54:01.465172] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.366 [2024-07-16 00:54:01.568968] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.366 [2024-07-16 00:54:01.569014] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.366 [2024-07-16 00:54:01.569027] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.366 [2024-07-16 00:54:01.569042] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.366 [2024-07-16 00:54:01.569052] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:44.366 [2024-07-16 00:54:01.569077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.625 [2024-07-16 00:54:02.366677] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.625 [2024-07-16 00:54:02.374825] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:44.625 null0 00:27:44.625 [2024-07-16 00:54:02.406838] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3179944 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3179944 /tmp/host.sock 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3179944 ']' 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:44.625 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:44.625 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.884 [2024-07-16 00:54:02.481359] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:27:44.884 [2024-07-16 00:54:02.481416] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3179944 ] 00:27:44.884 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.884 [2024-07-16 00:54:02.563940] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.884 [2024-07-16 00:54:02.653877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.884 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:44.884 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:44.884 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:44.884 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:44.884 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.884 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.884 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.884 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:44.884 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.884 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:45.142 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.142 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:45.142 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.142 00:54:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.071 [2024-07-16 00:54:03.825443] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:46.071 [2024-07-16 00:54:03.825467] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:46.071 [2024-07-16 00:54:03.825487] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:46.329 [2024-07-16 00:54:03.911788] 
bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:46.329 [2024-07-16 00:54:04.136172] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:46.329 [2024-07-16 00:54:04.136226] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:46.329 [2024-07-16 00:54:04.136261] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:46.329 [2024-07-16 00:54:04.136282] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:46.329 [2024-07-16 00:54:04.136306] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:46.329 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.329 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:46.329 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:46.329 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.329 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:46.329 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.329 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:46.329 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.329 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:46.329 [2024-07-16 00:54:04.143293] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1de15b0 was disconnected and freed. delete nvme_qpair. 
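The trace above is the wait_for_bdev/get_bdev_list pattern the test repeats for the rest of the run: query bdev_get_bdevs on the host socket, pull the names out with jq, sort them, and compare against the expected bdev. A minimal standalone sketch of that polling loop follows; it assumes SPDK's scripts/rpc.py is reachable on PATH and that the host app is still listening on /tmp/host.sock as in this run, and the default name and timeout are illustrative rather than taken from the script.

#!/usr/bin/env bash
# Poll the SPDK host app until an expected bdev appears (sketch; assumes
# scripts/rpc.py from an SPDK checkout is on PATH and /tmp/host.sock is live).
set -euo pipefail

HOST_SOCK=/tmp/host.sock      # same socket the trace's rpc_cmd -s uses
EXPECTED=${1:-nvme0n1}        # bdev to wait for (illustrative default)
TIMEOUT=30                    # seconds; arbitrary for this sketch

deadline=$((SECONDS + TIMEOUT))
while (( SECONDS < deadline )); do
    # Same pipeline as the trace: list bdev names, sorted, space-joined.
    names=$(rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
    if [[ " $names " == *" $EXPECTED "* ]]; then
        echo "found: $names"
        exit 0
    fi
    sleep 1
done
echo "timed out waiting for $EXPECTED" >&2
exit 1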
00:27:46.329 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:46.587 00:54:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:47.521 00:54:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:47.779 00:54:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:47.779 00:54:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:47.779 00:54:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.779 00:54:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:47.779 00:54:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.779 00:54:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:47.779 00:54:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.779 00:54:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:47.779 00:54:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:48.714 00:54:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:48.714 00:54:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.714 00:54:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:48.714 00:54:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.714 00:54:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:48.714 00:54:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:27:48.714 00:54:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:48.714 00:54:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.714 00:54:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:48.714 00:54:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:49.664 00:54:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:49.664 00:54:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:49.664 00:54:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:49.664 00:54:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.664 00:54:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:49.664 00:54:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.664 00:54:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:49.664 00:54:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.923 00:54:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:49.923 00:54:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:50.858 00:54:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:50.858 00:54:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.858 00:54:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:50.858 00:54:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.858 00:54:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.858 00:54:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:50.858 00:54:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:50.858 00:54:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.858 00:54:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:50.858 00:54:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:51.794 [2024-07-16 00:54:09.577131] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:51.794 [2024-07-16 00:54:09.577182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.794 [2024-07-16 00:54:09.577199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.794 [2024-07-16 00:54:09.577212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.794 [2024-07-16 00:54:09.577223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:51.794 [2024-07-16 00:54:09.577234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.794 [2024-07-16 00:54:09.577244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.794 [2024-07-16 00:54:09.577260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.794 [2024-07-16 00:54:09.577270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.794 [2024-07-16 00:54:09.577282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.794 [2024-07-16 00:54:09.577292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.794 [2024-07-16 00:54:09.577303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da7dc0 is same with the state(5) to be set 00:27:51.794 00:54:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:51.794 00:54:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.794 00:54:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:51.794 00:54:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.794 00:54:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:51.794 00:54:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:51.794 00:54:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:51.794 [2024-07-16 00:54:09.587152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da7dc0 (9): Bad file descriptor 00:27:51.794 [2024-07-16 00:54:09.597197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:51.794 00:54:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.053 00:54:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:52.053 00:54:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:52.988 [2024-07-16 00:54:10.626358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:52.988 [2024-07-16 00:54:10.626479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1da7dc0 with addr=10.0.0.2, port=4420 00:27:52.988 [2024-07-16 00:54:10.626512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da7dc0 is same with the state(5) to be set 00:27:52.988 [2024-07-16 00:54:10.626571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da7dc0 (9): Bad file descriptor 00:27:52.988 [2024-07-16 00:54:10.626680] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
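At this point the test has already pulled 10.0.0.2 off cvl_0_0 inside the namespace, so each reconnect attempt times out and the controller cycles through the error/reset states logged above; the discovery was started with --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1, which bounds how long this goes on. One way to watch that state from outside the test is to query the controller list on the same host socket; this is a sketch, and the exact fields returned by bdev_nvme_get_controllers vary with the SPDK version.

# Inspect NVMe bdev controller state on the host app (sketch; assumes
# scripts/rpc.py from an SPDK checkout; output fields differ across versions).
sudo ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
    | jq '.[] | {name, ctrlrs: [.ctrlrs[]? | {state: .state, trid: .trid}]}'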
00:27:52.988 [2024-07-16 00:54:10.626721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:52.988 [2024-07-16 00:54:10.626742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:52.988 [2024-07-16 00:54:10.626767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:52.988 [2024-07-16 00:54:10.626810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:52.988 [2024-07-16 00:54:10.626833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:52.988 00:54:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:52.988 00:54:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:52.988 00:54:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.988 00:54:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:52.988 00:54:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.988 00:54:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:52.988 00:54:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:52.988 00:54:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.988 00:54:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:52.988 00:54:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:53.924 [2024-07-16 00:54:11.629332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:53.924 [2024-07-16 00:54:11.629359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:53.924 [2024-07-16 00:54:11.629370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:53.924 [2024-07-16 00:54:11.629381] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:53.924 [2024-07-16 00:54:11.629397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
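The repeated reconnect failures above trace back to the step logged earlier at discovery_remove_ifc.sh@75-76, where the target-side interface loses its address and is downed inside the namespace. Reproduced on its own, using the interface and namespace names from this run, that teardown is just:

# Take the target-facing interface away so the initiator's connection times out
# (sketch; cvl_0_0 / cvl_0_0_ns_spdk / 10.0.0.2 are the values from this run).
NS=cvl_0_0_ns_spdk
IF=cvl_0_0
sudo ip netns exec "$NS" ip addr del 10.0.0.2/24 dev "$IF"
sudo ip netns exec "$NS" ip link set "$IF" down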
00:27:53.924 [2024-07-16 00:54:11.629421] bdev_nvme.c:6739:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:53.924 [2024-07-16 00:54:11.629447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.924 [2024-07-16 00:54:11.629461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.924 [2024-07-16 00:54:11.629474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.924 [2024-07-16 00:54:11.629484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.924 [2024-07-16 00:54:11.629495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.924 [2024-07-16 00:54:11.629505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.924 [2024-07-16 00:54:11.629515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.924 [2024-07-16 00:54:11.629531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.924 [2024-07-16 00:54:11.629543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.924 [2024-07-16 00:54:11.629553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.924 [2024-07-16 00:54:11.629563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
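With the 2-second controller-loss timeout expired, the discovery poller drops its entry for nqn.2016-06.io.spdk:cnode0 and the data controller is failed for good, which is what remove_discovery_entry and the ABORTED completions above record. To confirm what the discovery service still tracks at a moment like this, recent SPDK trees expose it over RPC; the call below is a sketch and is not part of this test script.

# Dump the discovery poller's current view (sketch; bdev_nvme_get_discovery_info
# is only available in reasonably recent SPDK releases).
sudo ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info | jq .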
00:27:53.924 [2024-07-16 00:54:11.629635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da7180 (9): Bad file descriptor 00:27:53.924 [2024-07-16 00:54:11.630655] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:53.924 [2024-07-16 00:54:11.630670] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:53.924 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:53.924 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:53.924 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:53.924 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.924 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:53.924 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:53.924 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:53.924 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.924 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:53.924 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:53.924 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.183 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:54.183 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:54.183 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:54.183 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:54.183 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.183 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:54.183 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:54.183 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:54.183 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.183 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:54.183 00:54:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:55.120 00:54:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:55.120 00:54:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:55.120 00:54:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:55.120 00:54:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.120 00:54:12 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:27:55.120 00:54:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:55.120 00:54:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:55.120 00:54:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.120 00:54:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:55.120 00:54:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:56.054 [2024-07-16 00:54:13.640216] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:56.054 [2024-07-16 00:54:13.640237] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:56.054 [2024-07-16 00:54:13.640258] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:56.055 [2024-07-16 00:54:13.767704] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:56.313 00:54:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:56.313 00:54:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:56.313 00:54:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:56.313 00:54:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.313 00:54:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:56.313 00:54:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.313 00:54:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:56.313 00:54:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.313 [2024-07-16 00:54:13.992126] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:56.313 [2024-07-16 00:54:13.992169] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:56.313 [2024-07-16 00:54:13.992195] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:56.313 [2024-07-16 00:54:13.992211] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:56.313 [2024-07-16 00:54:13.992221] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:56.313 00:54:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:56.313 00:54:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:56.313 [2024-07-16 00:54:13.999645] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1daeb10 was disconnected and freed. delete nvme_qpair. 
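The re-attach logged here is driven by the mirror image of the earlier teardown: discovery_remove_ifc.sh@82-83 give cvl_0_0 its address back and bring it up, the discovery controller reconnects to 10.0.0.2:8009, and the namespace comes back as nvme1n1. Standalone, with the same names as this run, that restore step is:

# Put the target-facing interface back so discovery can re-attach (sketch;
# namespace, interface and address are the values used in this run).
NS=cvl_0_0_ns_spdk
IF=cvl_0_0
sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF"
sudo ip netns exec "$NS" ip link set "$IF" up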
00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3179944 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3179944 ']' 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3179944 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:57.250 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3179944 00:27:57.509 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:57.509 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:57.509 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3179944' 00:27:57.509 killing process with pid 3179944 00:27:57.509 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3179944 00:27:57.509 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3179944 00:27:57.509 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:57.509 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:57.509 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:57.509 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:57.509 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:57.509 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:57.509 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:57.509 rmmod nvme_tcp 00:27:57.509 rmmod nvme_fabrics 00:27:57.509 rmmod nvme_keyring 00:27:57.768 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:57.768 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:57.768 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
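The rmmod/modprobe -r lines above are nvmfcleanup unloading the initiator-side kernel modules; the trace then goes on to kill the target app (pid 3179605) and remove the namespace. Condensed into a few commands, and using the names from this run, the remaining cleanup amounts to something like the sketch below (the pkill pattern is illustrative, not what nvmftestfini literally does):

# Finish the teardown after the host app is gone (sketch; namespace and
# interface names are the ones from this run).
sudo pkill -f nvmf_tgt 2>/dev/null || true          # target still running in the netns
sudo ip netns del cvl_0_0_ns_spdk 2>/dev/null || true
sudo ip -4 addr flush cvl_0_1 2>/dev/null || true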
00:27:57.768 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3179605 ']' 00:27:57.768 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3179605 00:27:57.768 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3179605 ']' 00:27:57.768 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3179605 00:27:57.768 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:57.768 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:57.768 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3179605 00:27:57.769 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:57.769 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:57.769 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3179605' 00:27:57.769 killing process with pid 3179605 00:27:57.769 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3179605 00:27:57.769 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3179605 00:27:58.028 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:58.028 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:58.028 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:58.028 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:58.028 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:58.028 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.028 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.028 00:54:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.933 00:54:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:00.192 00:28:00.192 real 0m22.406s 00:28:00.192 user 0m28.355s 00:28:00.192 sys 0m5.775s 00:28:00.192 00:54:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:00.192 00:54:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:00.192 ************************************ 00:28:00.192 END TEST nvmf_discovery_remove_ifc 00:28:00.192 ************************************ 00:28:00.192 00:54:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:00.192 00:54:17 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:00.192 00:54:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:00.192 00:54:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.192 00:54:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:00.192 ************************************ 00:28:00.192 START TEST nvmf_identify_kernel_target 00:28:00.192 ************************************ 
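run_test then moves straight on to nvmf_identify_kernel_target, which exercises the Linux kernel NVMe/TCP target rather than the SPDK one (the sourced common.sh sets NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn a little further down). For orientation, a kernel target of that kind is normally assembled through nvmet's configfs interface; the sketch below shows the generic pattern (subsystem, namespace, TCP port) with an illustrative backing device and port number, and is not lifted from identify_kernel_nvmf.sh itself.

#!/usr/bin/env bash
# Generic kernel NVMe/TCP target via nvmet configfs (sketch; /dev/nullb0 and
# port 1 are illustrative, the NQN matches the NVME_SUBNQN used by this suite).
set -euo pipefail

modprobe nvmet
modprobe nvmet-tcp
CFG=/sys/kernel/config/nvmet
NQN=nqn.2016-06.io.spdk:testnqn

mkdir -p "$CFG/subsystems/$NQN"
echo 1 > "$CFG/subsystems/$NQN/attr_allow_any_host"

mkdir -p "$CFG/subsystems/$NQN/namespaces/1"
echo /dev/nullb0 > "$CFG/subsystems/$NQN/namespaces/1/device_path"   # needs null_blk loaded
echo 1 > "$CFG/subsystems/$NQN/namespaces/1/enable"

mkdir -p "$CFG/ports/1"
echo tcp      > "$CFG/ports/1/addr_trtype"
echo ipv4     > "$CFG/ports/1/addr_adrfam"
echo 10.0.0.2 > "$CFG/ports/1/addr_traddr"
echo 4420     > "$CFG/ports/1/addr_trsvcid"

# Expose the subsystem on the TCP port.
ln -s "$CFG/subsystems/$NQN" "$CFG/ports/1/subsystems/$NQN"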
00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:00.192 * Looking for test storage... 00:28:00.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:00.192 00:54:17 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:28:00.192 00:54:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:06.777 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:06.777 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:06.778 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:06.778 Found net devices under 0000:af:00.0: cvl_0_0 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:06.778 Found net devices under 0000:af:00.1: cvl_0_1 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:06.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:28:06.778 00:28:06.778 --- 10.0.0.2 ping statistics --- 00:28:06.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.778 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:06.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:28:06.778 00:28:06.778 --- 10.0.0.1 ping statistics --- 00:28:06.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.778 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:06.778 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:06.779 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:06.779 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:06.779 00:54:23 
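The nvmf_tcp_init trace above builds the point-to-point NVMe/TCP topology this test runs over: the target-side port cvl_0_0 is moved into a dedicated network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace at 10.0.0.1/24, TCP port 4420 is opened in iptables, and one ping in each direction confirms reachability. A condensed sketch of those commands, using the interface names and addresses from this particular run (they differ per host):

  # namespace plumbing as traced above; cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this run
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside the namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic on 4420
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1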
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:06.779 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:28:06.779 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:06.779 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:06.779 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:06.779 00:54:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:09.371 Waiting for block devices as requested 00:28:09.371 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:28:09.371 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:09.371 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:09.371 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:09.371 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:09.371 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:09.371 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:09.630 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:09.630 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:09.630 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:09.889 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:09.889 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:09.889 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:09.889 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:10.148 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:10.148 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:10.148 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:10.407 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:10.407 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:10.407 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:10.407 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:10.407 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:10.407 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:10.407 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:10.407 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:10.407 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:10.407 No valid GPT data, bailing 00:28:10.407 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:10.407 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:10.407 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:10.407 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:10.408 00:28:10.408 Discovery Log Number of Records 2, Generation counter 2 00:28:10.408 =====Discovery Log Entry 0====== 00:28:10.408 trtype: tcp 00:28:10.408 adrfam: ipv4 00:28:10.408 subtype: current discovery subsystem 00:28:10.408 treq: not specified, sq flow control disable supported 00:28:10.408 portid: 1 00:28:10.408 trsvcid: 4420 00:28:10.408 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:10.408 traddr: 10.0.0.1 00:28:10.408 eflags: none 00:28:10.408 sectype: none 00:28:10.408 =====Discovery Log Entry 1====== 00:28:10.408 trtype: tcp 00:28:10.408 adrfam: ipv4 00:28:10.408 subtype: nvme subsystem 00:28:10.408 treq: not specified, sq flow control disable supported 00:28:10.408 portid: 1 00:28:10.408 trsvcid: 4420 00:28:10.408 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:10.408 traddr: 10.0.0.1 00:28:10.408 eflags: none 00:28:10.408 sectype: none 00:28:10.408 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:10.408 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:10.408 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.668 ===================================================== 00:28:10.668 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:10.668 ===================================================== 00:28:10.668 Controller Capabilities/Features 00:28:10.668 ================================ 00:28:10.668 Vendor ID: 0000 00:28:10.668 Subsystem Vendor ID: 0000 00:28:10.668 Serial Number: d37591861d7857c13101 00:28:10.668 Model Number: Linux 00:28:10.668 Firmware Version: 6.7.0-68 00:28:10.668 Recommended Arb Burst: 0 00:28:10.668 IEEE OUI Identifier: 00 00 00 00:28:10.668 Multi-path I/O 00:28:10.668 May have multiple subsystem ports: No 00:28:10.668 May have multiple 
controllers: No 00:28:10.668 Associated with SR-IOV VF: No 00:28:10.668 Max Data Transfer Size: Unlimited 00:28:10.668 Max Number of Namespaces: 0 00:28:10.668 Max Number of I/O Queues: 1024 00:28:10.668 NVMe Specification Version (VS): 1.3 00:28:10.668 NVMe Specification Version (Identify): 1.3 00:28:10.668 Maximum Queue Entries: 1024 00:28:10.668 Contiguous Queues Required: No 00:28:10.668 Arbitration Mechanisms Supported 00:28:10.668 Weighted Round Robin: Not Supported 00:28:10.668 Vendor Specific: Not Supported 00:28:10.668 Reset Timeout: 7500 ms 00:28:10.668 Doorbell Stride: 4 bytes 00:28:10.668 NVM Subsystem Reset: Not Supported 00:28:10.668 Command Sets Supported 00:28:10.668 NVM Command Set: Supported 00:28:10.668 Boot Partition: Not Supported 00:28:10.668 Memory Page Size Minimum: 4096 bytes 00:28:10.668 Memory Page Size Maximum: 4096 bytes 00:28:10.668 Persistent Memory Region: Not Supported 00:28:10.668 Optional Asynchronous Events Supported 00:28:10.668 Namespace Attribute Notices: Not Supported 00:28:10.668 Firmware Activation Notices: Not Supported 00:28:10.668 ANA Change Notices: Not Supported 00:28:10.668 PLE Aggregate Log Change Notices: Not Supported 00:28:10.668 LBA Status Info Alert Notices: Not Supported 00:28:10.668 EGE Aggregate Log Change Notices: Not Supported 00:28:10.668 Normal NVM Subsystem Shutdown event: Not Supported 00:28:10.668 Zone Descriptor Change Notices: Not Supported 00:28:10.668 Discovery Log Change Notices: Supported 00:28:10.668 Controller Attributes 00:28:10.668 128-bit Host Identifier: Not Supported 00:28:10.668 Non-Operational Permissive Mode: Not Supported 00:28:10.668 NVM Sets: Not Supported 00:28:10.668 Read Recovery Levels: Not Supported 00:28:10.668 Endurance Groups: Not Supported 00:28:10.668 Predictable Latency Mode: Not Supported 00:28:10.668 Traffic Based Keep ALive: Not Supported 00:28:10.668 Namespace Granularity: Not Supported 00:28:10.668 SQ Associations: Not Supported 00:28:10.668 UUID List: Not Supported 00:28:10.668 Multi-Domain Subsystem: Not Supported 00:28:10.668 Fixed Capacity Management: Not Supported 00:28:10.668 Variable Capacity Management: Not Supported 00:28:10.668 Delete Endurance Group: Not Supported 00:28:10.668 Delete NVM Set: Not Supported 00:28:10.668 Extended LBA Formats Supported: Not Supported 00:28:10.668 Flexible Data Placement Supported: Not Supported 00:28:10.668 00:28:10.668 Controller Memory Buffer Support 00:28:10.668 ================================ 00:28:10.668 Supported: No 00:28:10.668 00:28:10.668 Persistent Memory Region Support 00:28:10.668 ================================ 00:28:10.668 Supported: No 00:28:10.668 00:28:10.668 Admin Command Set Attributes 00:28:10.668 ============================ 00:28:10.668 Security Send/Receive: Not Supported 00:28:10.668 Format NVM: Not Supported 00:28:10.668 Firmware Activate/Download: Not Supported 00:28:10.668 Namespace Management: Not Supported 00:28:10.668 Device Self-Test: Not Supported 00:28:10.668 Directives: Not Supported 00:28:10.668 NVMe-MI: Not Supported 00:28:10.668 Virtualization Management: Not Supported 00:28:10.668 Doorbell Buffer Config: Not Supported 00:28:10.668 Get LBA Status Capability: Not Supported 00:28:10.668 Command & Feature Lockdown Capability: Not Supported 00:28:10.668 Abort Command Limit: 1 00:28:10.668 Async Event Request Limit: 1 00:28:10.668 Number of Firmware Slots: N/A 00:28:10.668 Firmware Slot 1 Read-Only: N/A 00:28:10.668 Firmware Activation Without Reset: N/A 00:28:10.668 Multiple Update Detection Support: N/A 
00:28:10.668 Firmware Update Granularity: No Information Provided 00:28:10.668 Per-Namespace SMART Log: No 00:28:10.668 Asymmetric Namespace Access Log Page: Not Supported 00:28:10.668 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:10.668 Command Effects Log Page: Not Supported 00:28:10.668 Get Log Page Extended Data: Supported 00:28:10.668 Telemetry Log Pages: Not Supported 00:28:10.668 Persistent Event Log Pages: Not Supported 00:28:10.668 Supported Log Pages Log Page: May Support 00:28:10.668 Commands Supported & Effects Log Page: Not Supported 00:28:10.668 Feature Identifiers & Effects Log Page:May Support 00:28:10.668 NVMe-MI Commands & Effects Log Page: May Support 00:28:10.668 Data Area 4 for Telemetry Log: Not Supported 00:28:10.668 Error Log Page Entries Supported: 1 00:28:10.668 Keep Alive: Not Supported 00:28:10.668 00:28:10.668 NVM Command Set Attributes 00:28:10.668 ========================== 00:28:10.668 Submission Queue Entry Size 00:28:10.668 Max: 1 00:28:10.668 Min: 1 00:28:10.668 Completion Queue Entry Size 00:28:10.668 Max: 1 00:28:10.668 Min: 1 00:28:10.668 Number of Namespaces: 0 00:28:10.668 Compare Command: Not Supported 00:28:10.668 Write Uncorrectable Command: Not Supported 00:28:10.668 Dataset Management Command: Not Supported 00:28:10.668 Write Zeroes Command: Not Supported 00:28:10.668 Set Features Save Field: Not Supported 00:28:10.668 Reservations: Not Supported 00:28:10.668 Timestamp: Not Supported 00:28:10.668 Copy: Not Supported 00:28:10.668 Volatile Write Cache: Not Present 00:28:10.668 Atomic Write Unit (Normal): 1 00:28:10.668 Atomic Write Unit (PFail): 1 00:28:10.668 Atomic Compare & Write Unit: 1 00:28:10.668 Fused Compare & Write: Not Supported 00:28:10.668 Scatter-Gather List 00:28:10.668 SGL Command Set: Supported 00:28:10.668 SGL Keyed: Not Supported 00:28:10.668 SGL Bit Bucket Descriptor: Not Supported 00:28:10.668 SGL Metadata Pointer: Not Supported 00:28:10.668 Oversized SGL: Not Supported 00:28:10.668 SGL Metadata Address: Not Supported 00:28:10.668 SGL Offset: Supported 00:28:10.668 Transport SGL Data Block: Not Supported 00:28:10.668 Replay Protected Memory Block: Not Supported 00:28:10.668 00:28:10.668 Firmware Slot Information 00:28:10.668 ========================= 00:28:10.668 Active slot: 0 00:28:10.668 00:28:10.668 00:28:10.668 Error Log 00:28:10.668 ========= 00:28:10.668 00:28:10.668 Active Namespaces 00:28:10.668 ================= 00:28:10.669 Discovery Log Page 00:28:10.669 ================== 00:28:10.669 Generation Counter: 2 00:28:10.669 Number of Records: 2 00:28:10.669 Record Format: 0 00:28:10.669 00:28:10.669 Discovery Log Entry 0 00:28:10.669 ---------------------- 00:28:10.669 Transport Type: 3 (TCP) 00:28:10.669 Address Family: 1 (IPv4) 00:28:10.669 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:10.669 Entry Flags: 00:28:10.669 Duplicate Returned Information: 0 00:28:10.669 Explicit Persistent Connection Support for Discovery: 0 00:28:10.669 Transport Requirements: 00:28:10.669 Secure Channel: Not Specified 00:28:10.669 Port ID: 1 (0x0001) 00:28:10.669 Controller ID: 65535 (0xffff) 00:28:10.669 Admin Max SQ Size: 32 00:28:10.669 Transport Service Identifier: 4420 00:28:10.669 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:10.669 Transport Address: 10.0.0.1 00:28:10.669 Discovery Log Entry 1 00:28:10.669 ---------------------- 00:28:10.669 Transport Type: 3 (TCP) 00:28:10.669 Address Family: 1 (IPv4) 00:28:10.669 Subsystem Type: 2 (NVM Subsystem) 00:28:10.669 Entry Flags: 
00:28:10.669 Duplicate Returned Information: 0 00:28:10.669 Explicit Persistent Connection Support for Discovery: 0 00:28:10.669 Transport Requirements: 00:28:10.669 Secure Channel: Not Specified 00:28:10.669 Port ID: 1 (0x0001) 00:28:10.669 Controller ID: 65535 (0xffff) 00:28:10.669 Admin Max SQ Size: 32 00:28:10.669 Transport Service Identifier: 4420 00:28:10.669 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:10.669 Transport Address: 10.0.0.1 00:28:10.669 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:10.669 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.669 get_feature(0x01) failed 00:28:10.669 get_feature(0x02) failed 00:28:10.669 get_feature(0x04) failed 00:28:10.669 ===================================================== 00:28:10.669 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:10.669 ===================================================== 00:28:10.669 Controller Capabilities/Features 00:28:10.669 ================================ 00:28:10.669 Vendor ID: 0000 00:28:10.669 Subsystem Vendor ID: 0000 00:28:10.669 Serial Number: 90381883eb9eb5c256d3 00:28:10.669 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:10.669 Firmware Version: 6.7.0-68 00:28:10.669 Recommended Arb Burst: 6 00:28:10.669 IEEE OUI Identifier: 00 00 00 00:28:10.669 Multi-path I/O 00:28:10.669 May have multiple subsystem ports: Yes 00:28:10.669 May have multiple controllers: Yes 00:28:10.669 Associated with SR-IOV VF: No 00:28:10.669 Max Data Transfer Size: Unlimited 00:28:10.669 Max Number of Namespaces: 1024 00:28:10.669 Max Number of I/O Queues: 128 00:28:10.669 NVMe Specification Version (VS): 1.3 00:28:10.669 NVMe Specification Version (Identify): 1.3 00:28:10.669 Maximum Queue Entries: 1024 00:28:10.669 Contiguous Queues Required: No 00:28:10.669 Arbitration Mechanisms Supported 00:28:10.669 Weighted Round Robin: Not Supported 00:28:10.669 Vendor Specific: Not Supported 00:28:10.669 Reset Timeout: 7500 ms 00:28:10.669 Doorbell Stride: 4 bytes 00:28:10.669 NVM Subsystem Reset: Not Supported 00:28:10.669 Command Sets Supported 00:28:10.669 NVM Command Set: Supported 00:28:10.669 Boot Partition: Not Supported 00:28:10.669 Memory Page Size Minimum: 4096 bytes 00:28:10.669 Memory Page Size Maximum: 4096 bytes 00:28:10.669 Persistent Memory Region: Not Supported 00:28:10.669 Optional Asynchronous Events Supported 00:28:10.669 Namespace Attribute Notices: Supported 00:28:10.669 Firmware Activation Notices: Not Supported 00:28:10.669 ANA Change Notices: Supported 00:28:10.669 PLE Aggregate Log Change Notices: Not Supported 00:28:10.669 LBA Status Info Alert Notices: Not Supported 00:28:10.669 EGE Aggregate Log Change Notices: Not Supported 00:28:10.669 Normal NVM Subsystem Shutdown event: Not Supported 00:28:10.669 Zone Descriptor Change Notices: Not Supported 00:28:10.669 Discovery Log Change Notices: Not Supported 00:28:10.669 Controller Attributes 00:28:10.669 128-bit Host Identifier: Supported 00:28:10.669 Non-Operational Permissive Mode: Not Supported 00:28:10.669 NVM Sets: Not Supported 00:28:10.669 Read Recovery Levels: Not Supported 00:28:10.669 Endurance Groups: Not Supported 00:28:10.669 Predictable Latency Mode: Not Supported 00:28:10.669 Traffic Based Keep ALive: Supported 00:28:10.669 Namespace Granularity: Not Supported 
00:28:10.669 SQ Associations: Not Supported 00:28:10.669 UUID List: Not Supported 00:28:10.669 Multi-Domain Subsystem: Not Supported 00:28:10.669 Fixed Capacity Management: Not Supported 00:28:10.669 Variable Capacity Management: Not Supported 00:28:10.669 Delete Endurance Group: Not Supported 00:28:10.669 Delete NVM Set: Not Supported 00:28:10.669 Extended LBA Formats Supported: Not Supported 00:28:10.669 Flexible Data Placement Supported: Not Supported 00:28:10.669 00:28:10.669 Controller Memory Buffer Support 00:28:10.669 ================================ 00:28:10.669 Supported: No 00:28:10.669 00:28:10.669 Persistent Memory Region Support 00:28:10.669 ================================ 00:28:10.669 Supported: No 00:28:10.669 00:28:10.669 Admin Command Set Attributes 00:28:10.669 ============================ 00:28:10.669 Security Send/Receive: Not Supported 00:28:10.669 Format NVM: Not Supported 00:28:10.669 Firmware Activate/Download: Not Supported 00:28:10.669 Namespace Management: Not Supported 00:28:10.669 Device Self-Test: Not Supported 00:28:10.669 Directives: Not Supported 00:28:10.669 NVMe-MI: Not Supported 00:28:10.669 Virtualization Management: Not Supported 00:28:10.669 Doorbell Buffer Config: Not Supported 00:28:10.669 Get LBA Status Capability: Not Supported 00:28:10.669 Command & Feature Lockdown Capability: Not Supported 00:28:10.669 Abort Command Limit: 4 00:28:10.669 Async Event Request Limit: 4 00:28:10.669 Number of Firmware Slots: N/A 00:28:10.669 Firmware Slot 1 Read-Only: N/A 00:28:10.669 Firmware Activation Without Reset: N/A 00:28:10.669 Multiple Update Detection Support: N/A 00:28:10.669 Firmware Update Granularity: No Information Provided 00:28:10.669 Per-Namespace SMART Log: Yes 00:28:10.669 Asymmetric Namespace Access Log Page: Supported 00:28:10.669 ANA Transition Time : 10 sec 00:28:10.669 00:28:10.669 Asymmetric Namespace Access Capabilities 00:28:10.669 ANA Optimized State : Supported 00:28:10.669 ANA Non-Optimized State : Supported 00:28:10.669 ANA Inaccessible State : Supported 00:28:10.669 ANA Persistent Loss State : Supported 00:28:10.669 ANA Change State : Supported 00:28:10.669 ANAGRPID is not changed : No 00:28:10.669 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:10.669 00:28:10.669 ANA Group Identifier Maximum : 128 00:28:10.669 Number of ANA Group Identifiers : 128 00:28:10.669 Max Number of Allowed Namespaces : 1024 00:28:10.669 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:10.669 Command Effects Log Page: Supported 00:28:10.669 Get Log Page Extended Data: Supported 00:28:10.669 Telemetry Log Pages: Not Supported 00:28:10.669 Persistent Event Log Pages: Not Supported 00:28:10.669 Supported Log Pages Log Page: May Support 00:28:10.669 Commands Supported & Effects Log Page: Not Supported 00:28:10.669 Feature Identifiers & Effects Log Page:May Support 00:28:10.669 NVMe-MI Commands & Effects Log Page: May Support 00:28:10.669 Data Area 4 for Telemetry Log: Not Supported 00:28:10.669 Error Log Page Entries Supported: 128 00:28:10.669 Keep Alive: Supported 00:28:10.669 Keep Alive Granularity: 1000 ms 00:28:10.669 00:28:10.669 NVM Command Set Attributes 00:28:10.669 ========================== 00:28:10.669 Submission Queue Entry Size 00:28:10.669 Max: 64 00:28:10.669 Min: 64 00:28:10.669 Completion Queue Entry Size 00:28:10.669 Max: 16 00:28:10.669 Min: 16 00:28:10.669 Number of Namespaces: 1024 00:28:10.669 Compare Command: Not Supported 00:28:10.669 Write Uncorrectable Command: Not Supported 00:28:10.669 Dataset Management Command: Supported 
00:28:10.669 Write Zeroes Command: Supported 00:28:10.669 Set Features Save Field: Not Supported 00:28:10.669 Reservations: Not Supported 00:28:10.669 Timestamp: Not Supported 00:28:10.669 Copy: Not Supported 00:28:10.669 Volatile Write Cache: Present 00:28:10.669 Atomic Write Unit (Normal): 1 00:28:10.669 Atomic Write Unit (PFail): 1 00:28:10.669 Atomic Compare & Write Unit: 1 00:28:10.669 Fused Compare & Write: Not Supported 00:28:10.669 Scatter-Gather List 00:28:10.669 SGL Command Set: Supported 00:28:10.669 SGL Keyed: Not Supported 00:28:10.669 SGL Bit Bucket Descriptor: Not Supported 00:28:10.669 SGL Metadata Pointer: Not Supported 00:28:10.669 Oversized SGL: Not Supported 00:28:10.669 SGL Metadata Address: Not Supported 00:28:10.669 SGL Offset: Supported 00:28:10.669 Transport SGL Data Block: Not Supported 00:28:10.670 Replay Protected Memory Block: Not Supported 00:28:10.670 00:28:10.670 Firmware Slot Information 00:28:10.670 ========================= 00:28:10.670 Active slot: 0 00:28:10.670 00:28:10.670 Asymmetric Namespace Access 00:28:10.670 =========================== 00:28:10.670 Change Count : 0 00:28:10.670 Number of ANA Group Descriptors : 1 00:28:10.670 ANA Group Descriptor : 0 00:28:10.670 ANA Group ID : 1 00:28:10.670 Number of NSID Values : 1 00:28:10.670 Change Count : 0 00:28:10.670 ANA State : 1 00:28:10.670 Namespace Identifier : 1 00:28:10.670 00:28:10.670 Commands Supported and Effects 00:28:10.670 ============================== 00:28:10.670 Admin Commands 00:28:10.670 -------------- 00:28:10.670 Get Log Page (02h): Supported 00:28:10.670 Identify (06h): Supported 00:28:10.670 Abort (08h): Supported 00:28:10.670 Set Features (09h): Supported 00:28:10.670 Get Features (0Ah): Supported 00:28:10.670 Asynchronous Event Request (0Ch): Supported 00:28:10.670 Keep Alive (18h): Supported 00:28:10.670 I/O Commands 00:28:10.670 ------------ 00:28:10.670 Flush (00h): Supported 00:28:10.670 Write (01h): Supported LBA-Change 00:28:10.670 Read (02h): Supported 00:28:10.670 Write Zeroes (08h): Supported LBA-Change 00:28:10.670 Dataset Management (09h): Supported 00:28:10.670 00:28:10.670 Error Log 00:28:10.670 ========= 00:28:10.670 Entry: 0 00:28:10.670 Error Count: 0x3 00:28:10.670 Submission Queue Id: 0x0 00:28:10.670 Command Id: 0x5 00:28:10.670 Phase Bit: 0 00:28:10.670 Status Code: 0x2 00:28:10.670 Status Code Type: 0x0 00:28:10.670 Do Not Retry: 1 00:28:10.670 Error Location: 0x28 00:28:10.670 LBA: 0x0 00:28:10.670 Namespace: 0x0 00:28:10.670 Vendor Log Page: 0x0 00:28:10.670 ----------- 00:28:10.670 Entry: 1 00:28:10.670 Error Count: 0x2 00:28:10.670 Submission Queue Id: 0x0 00:28:10.670 Command Id: 0x5 00:28:10.670 Phase Bit: 0 00:28:10.670 Status Code: 0x2 00:28:10.670 Status Code Type: 0x0 00:28:10.670 Do Not Retry: 1 00:28:10.670 Error Location: 0x28 00:28:10.670 LBA: 0x0 00:28:10.670 Namespace: 0x0 00:28:10.670 Vendor Log Page: 0x0 00:28:10.670 ----------- 00:28:10.670 Entry: 2 00:28:10.670 Error Count: 0x1 00:28:10.670 Submission Queue Id: 0x0 00:28:10.670 Command Id: 0x4 00:28:10.670 Phase Bit: 0 00:28:10.670 Status Code: 0x2 00:28:10.670 Status Code Type: 0x0 00:28:10.670 Do Not Retry: 1 00:28:10.670 Error Location: 0x28 00:28:10.670 LBA: 0x0 00:28:10.670 Namespace: 0x0 00:28:10.670 Vendor Log Page: 0x0 00:28:10.670 00:28:10.670 Number of Queues 00:28:10.670 ================ 00:28:10.670 Number of I/O Submission Queues: 128 00:28:10.670 Number of I/O Completion Queues: 128 00:28:10.670 00:28:10.670 ZNS Specific Controller Data 00:28:10.670 
============================ 00:28:10.670 Zone Append Size Limit: 0 00:28:10.670 00:28:10.670 00:28:10.670 Active Namespaces 00:28:10.670 ================= 00:28:10.670 get_feature(0x05) failed 00:28:10.670 Namespace ID:1 00:28:10.670 Command Set Identifier: NVM (00h) 00:28:10.670 Deallocate: Supported 00:28:10.670 Deallocated/Unwritten Error: Not Supported 00:28:10.670 Deallocated Read Value: Unknown 00:28:10.670 Deallocate in Write Zeroes: Not Supported 00:28:10.670 Deallocated Guard Field: 0xFFFF 00:28:10.670 Flush: Supported 00:28:10.670 Reservation: Not Supported 00:28:10.670 Namespace Sharing Capabilities: Multiple Controllers 00:28:10.670 Size (in LBAs): 1953525168 (931GiB) 00:28:10.670 Capacity (in LBAs): 1953525168 (931GiB) 00:28:10.670 Utilization (in LBAs): 1953525168 (931GiB) 00:28:10.670 UUID: 9292701f-ce61-4068-be99-3277afc99622 00:28:10.670 Thin Provisioning: Not Supported 00:28:10.670 Per-NS Atomic Units: Yes 00:28:10.670 Atomic Boundary Size (Normal): 0 00:28:10.670 Atomic Boundary Size (PFail): 0 00:28:10.670 Atomic Boundary Offset: 0 00:28:10.670 NGUID/EUI64 Never Reused: No 00:28:10.670 ANA group ID: 1 00:28:10.670 Namespace Write Protected: No 00:28:10.670 Number of LBA Formats: 1 00:28:10.670 Current LBA Format: LBA Format #00 00:28:10.670 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:10.670 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:10.670 rmmod nvme_tcp 00:28:10.670 rmmod nvme_fabrics 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:10.670 00:54:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.208 00:54:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:13.208 
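Both identify runs above were served by the in-kernel nvmet target that configure_kernel_target set up earlier in this trace with plain configfs operations (the mkdir/echo/ln -s sequence under /sys/kernel/config/nvmet), exporting the local /dev/nvme0n1 as namespace 1 of nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420. The xtrace hides the redirection targets of those echo commands, so the attribute file names in this sketch are an assumption based on the standard nvmet configfs layout; the model string is at least corroborated by the "Model Number: SPDK-nqn.2016-06.io.spdk:testnqn" reported in the identify output above. clean_kernel_target, traced next, unwinds the same state in reverse and then removes the nvmet modules.

  # reconstruction of configure_kernel_target; attribute file names are assumed (the '>' targets are elided by xtrace)
  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir "$sub"
  mkdir "$sub/namespaces/1"
  mkdir "$port"
  echo "SPDK-$nqn"  > "$sub/attr_model"                  # assumed target of 'echo SPDK-nqn...'
  echo 1            > "$sub/attr_allow_any_host"         # assumed
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"    # local NVMe disk found by setup.sh reset above
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"                # target_ip chosen by get_main_ns_ip above
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                       # expose the subsystem on the port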
00:54:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:13.208 00:54:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:13.208 00:54:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:28:13.208 00:54:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:13.208 00:54:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:13.208 00:54:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:13.208 00:54:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:13.208 00:54:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:13.208 00:54:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:13.208 00:54:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:15.746 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:15.746 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:16.681 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:28:16.681 00:28:16.681 real 0m16.546s 00:28:16.681 user 0m3.935s 00:28:16.681 sys 0m8.810s 00:28:16.681 00:54:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:16.681 00:54:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:16.681 ************************************ 00:28:16.681 END TEST nvmf_identify_kernel_target 00:28:16.681 ************************************ 00:28:16.681 00:54:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:16.681 00:54:34 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:16.681 00:54:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:16.681 00:54:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:16.681 00:54:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:16.681 ************************************ 00:28:16.681 START TEST nvmf_auth_host 00:28:16.681 ************************************ 00:28:16.681 00:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:16.940 * Looking for test storage... 00:28:16.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.940 00:54:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:16.941 00:54:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:23.521 
00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:23.521 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:23.521 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:23.521 Found net devices under 0000:af:00.0: 
cvl_0_0 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:23.521 Found net devices under 0000:af:00.1: cvl_0_1 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:23.521 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:23.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:23.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:28:23.522 00:28:23.522 --- 10.0.0.2 ping statistics --- 00:28:23.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.522 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:23.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:28:23.522 00:28:23.522 --- 10.0.0.1 ping statistics --- 00:28:23.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.522 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3192647 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3192647 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3192647 ']' 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
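Condensed for reference, the nvmf_tcp_init / nvmfappstart steps traced above amount to the sequence below; this is a sketch of this run only, with the interface names, namespace name, addresses and nvmf_tgt flags copied from the trace rather than being general defaults:

    target=cvl_0_0 initiator=cvl_0_1 ns=cvl_0_0_ns_spdk
    ip -4 addr flush "$target" && ip -4 addr flush "$initiator"
    ip netns add "$ns"                          # isolate the target-side E810 port
    ip link set "$target" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$initiator"    # root-namespace side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"
    ip link set "$initiator" up
    ip netns exec "$ns" ip link set "$target" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # root ns -> namespaced port
    ip netns exec "$ns" ping -c 1 10.0.0.1      # namespaced port -> root ns
    # the SPDK app is then launched inside the namespace and the harness waits
    # for its RPC socket (/var/tmp/spdk.sock) to appear
    ip netns exec "$ns" build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &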
00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:23.522 00:54:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6f5f4847e8898248c9624c894b9f929e 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Ycl 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6f5f4847e8898248c9624c894b9f929e 0 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6f5f4847e8898248c9624c894b9f929e 0 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6f5f4847e8898248c9624c894b9f929e 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Ycl 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Ycl 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Ycl 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:24.090 
00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ebe85c2a2260848de734b9d61833258b26f1fbfe6a6cc7887334a70c97a75b93 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.aRr 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ebe85c2a2260848de734b9d61833258b26f1fbfe6a6cc7887334a70c97a75b93 3 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ebe85c2a2260848de734b9d61833258b26f1fbfe6a6cc7887334a70c97a75b93 3 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ebe85c2a2260848de734b9d61833258b26f1fbfe6a6cc7887334a70c97a75b93 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.aRr 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.aRr 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.aRr 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eee5b09efd0f8761773f32a8761819fce6f63a286afda99b 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9QK 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eee5b09efd0f8761773f32a8761819fce6f63a286afda99b 0 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eee5b09efd0f8761773f32a8761819fce6f63a286afda99b 0 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eee5b09efd0f8761773f32a8761819fce6f63a286afda99b 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9QK 00:28:24.090 00:54:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9QK 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9QK 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b319faa28f7e8cd30c22df96dc07b222bb45c05d7f191141 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GXG 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b319faa28f7e8cd30c22df96dc07b222bb45c05d7f191141 2 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b319faa28f7e8cd30c22df96dc07b222bb45c05d7f191141 2 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b319faa28f7e8cd30c22df96dc07b222bb45c05d7f191141 00:28:24.090 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GXG 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GXG 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.GXG 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5a43bca903f6a7bf53f7dd24a2a4a016 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.EJf 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5a43bca903f6a7bf53f7dd24a2a4a016 1 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5a43bca903f6a7bf53f7dd24a2a4a016 1 
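Each gen_dhchap_key call traced above reads len/2 random bytes from /dev/urandom as a hex string, wraps it in the DH-HMAC-CHAP secret representation, and stores the result mode 0600 in a temp file. The inline "python -" body is not visible in the xtrace output; the sketch below is a rough reconstruction of it, assuming the usual DHHC-1 layout of base64(secret followed by its CRC-32, little-endian), which is consistent with the key strings seen in this trace:

    # rough reconstruction of "gen_dhchap_key null 32"; digest ids follow the map
    # above (null=0, sha256=1, sha384=2, sha512=3); the CRC-32 suffix and its
    # byte order are assumptions, not something shown in the trace
    key=$(xxd -p -c0 -l 16 /dev/urandom)    # 16 random bytes -> 32 hex characters
    file=$(mktemp -t spdk.key-null.XXX)
    # DHHC-1:<digest id>:<base64(secret + crc32(secret))>:
    python3 -c 'import base64, sys, zlib; s = sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode() + ":")' "$key" > "$file"
    chmod 0600 "$file"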
00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5a43bca903f6a7bf53f7dd24a2a4a016 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:24.350 00:54:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.EJf 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.EJf 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.EJf 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=58c7111d66927fe67a7a74002467b487 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.gC1 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 58c7111d66927fe67a7a74002467b487 1 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 58c7111d66927fe67a7a74002467b487 1 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=58c7111d66927fe67a7a74002467b487 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.gC1 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.gC1 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.gC1 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=d4c1f629faccbc594a3686efe02ca1ea8bbcb336e70eb2b2 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.L5p 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d4c1f629faccbc594a3686efe02ca1ea8bbcb336e70eb2b2 2 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d4c1f629faccbc594a3686efe02ca1ea8bbcb336e70eb2b2 2 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d4c1f629faccbc594a3686efe02ca1ea8bbcb336e70eb2b2 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.350 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.L5p 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.L5p 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.L5p 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=388d61ebb2af2c8c03cca83871006f3b 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.c5d 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 388d61ebb2af2c8c03cca83871006f3b 0 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 388d61ebb2af2c8c03cca83871006f3b 0 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=388d61ebb2af2c8c03cca83871006f3b 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.c5d 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.c5d 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.c5d 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3c5ef1689643e9f7595d7f9315a9761dcab74054a2f9e7d2c154b6d5650575e7 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.BqF 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3c5ef1689643e9f7595d7f9315a9761dcab74054a2f9e7d2c154b6d5650575e7 3 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3c5ef1689643e9f7595d7f9315a9761dcab74054a2f9e7d2c154b6d5650575e7 3 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3c5ef1689643e9f7595d7f9315a9761dcab74054a2f9e7d2c154b6d5650575e7 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.BqF 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.BqF 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.BqF 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3192647 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3192647 ']' 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
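Pulling the scattered mktemp results together, the key material generated above ends up in the following files (paths exactly as printed in the trace; keyid 4 gets a host key only, so ckeys[4] stays empty):

    keys[0]=/tmp/spdk.key-null.Ycl      ckeys[0]=/tmp/spdk.key-sha512.aRr
    keys[1]=/tmp/spdk.key-null.9QK      ckeys[1]=/tmp/spdk.key-sha384.GXG
    keys[2]=/tmp/spdk.key-sha256.EJf    ckeys[2]=/tmp/spdk.key-sha256.gC1
    keys[3]=/tmp/spdk.key-sha384.L5p    ckeys[3]=/tmp/spdk.key-null.c5d
    keys[4]=/tmp/spdk.key-sha512.BqF    ckeys[4]=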
00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:24.609 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ycl 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.aRr ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aRr 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9QK 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.GXG ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GXG 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.EJf 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.gC1 ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gC1 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
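The rpc_cmd keyring_file_add_key calls above register each secret file with the SPDK target under the names that the later attach steps reference. Expressed directly against scripts/rpc.py (rpc_cmd is the harness's thin wrapper around it, so this is an equivalent sketch rather than the literal command line it ran):

    # keyid 0 and 1 shown; keyids 2-4 follow the same pattern
    scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Ycl      # host secret
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aRr    # controller secret
    scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.9QK
    scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GXG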
00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.L5p 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.c5d ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.c5d 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.BqF 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.868 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.869 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.869 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.869 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.869 00:54:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:24.869 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:24.869 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:24.869 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:24.869 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:24.869 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:24.869 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
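configure_kernel_target has just derived its configfs paths; its mkdir/echo steps are traced below, but xtrace does not show the redirection targets. As a reading aid, here is the same sequence with the targets filled in from the standard nvmet configfs layout; the attribute names are therefore an assumption on my part, while the paths and values come from the trace:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    nsdir=$subsys/namespaces/1
    port=$nvmet/ports/1
    modprobe nvmet
    mkdir "$subsys" && mkdir "$nsdir" && mkdir "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
    echo 1             > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1  > "$nsdir/device_path"     # the local NVMe disk backs namespace 1
    echo 1             > "$nsdir/enable"
    echo 10.0.0.1      > "$port/addr_traddr"      # kernel target listens in the root namespace
    echo tcp           > "$port/addr_trtype"
    echo 4420          > "$port/addr_trsvcid"
    echo ipv4          > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"           # expose the subsystem on the port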
00:28:24.869 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:24.869 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:25.127 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:25.127 00:54:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:27.658 Waiting for block devices as requested 00:28:27.658 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:28:27.916 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:27.916 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:27.916 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:28.175 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:28.175 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:28.175 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:28.175 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:28.434 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:28.434 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:28.434 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:28.434 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:28.693 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:28.693 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:28.693 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:28.951 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:28.951 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:29.519 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:29.519 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:29.519 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:29.519 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:29.519 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:29.519 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:29.519 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:29.519 00:54:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:29.519 00:54:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:29.778 No valid GPT data, bailing 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:28:29.778 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:28:29.779 00:28:29.779 Discovery Log Number of Records 2, Generation counter 2 00:28:29.779 =====Discovery Log Entry 0====== 00:28:29.779 trtype: tcp 00:28:29.779 adrfam: ipv4 00:28:29.779 subtype: current discovery subsystem 00:28:29.779 treq: not specified, sq flow control disable supported 00:28:29.779 portid: 1 00:28:29.779 trsvcid: 4420 00:28:29.779 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:29.779 traddr: 10.0.0.1 00:28:29.779 eflags: none 00:28:29.779 sectype: none 00:28:29.779 =====Discovery Log Entry 1====== 00:28:29.779 trtype: tcp 00:28:29.779 adrfam: ipv4 00:28:29.779 subtype: nvme subsystem 00:28:29.779 treq: not specified, sq flow control disable supported 00:28:29.779 portid: 1 00:28:29.779 trsvcid: 4420 00:28:29.779 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:29.779 traddr: 10.0.0.1 00:28:29.779 eflags: none 00:28:29.779 sectype: none 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 
]] 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.779 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.038 nvme0n1 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.038 00:54:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:30.038 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.039 
00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.039 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.298 nvme0n1 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.298 00:54:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.298 00:54:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.557 nvme0n1 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
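Each connect_authenticate round, such as the sha256/ffdhe2048, keyid 1 case above, comes down to two host-side RPCs followed by a verify-and-detach. Written directly against scripts/rpc.py (again a sketch of what rpc_cmd forwards, not the literal harness invocation):

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py bdev_nvme_get_controllers      # expect nvme0 after a successful handshake
    scripts/rpc.py bdev_nvme_detach_controller nvme0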
00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.557 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.558 nvme0n1 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.558 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:30.817 00:54:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.817 nvme0n1 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.817 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.077 nvme0n1 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.077 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.336 00:54:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.336 nvme0n1 00:28:31.336 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.336 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.336 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.336 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.336 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.336 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.336 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.336 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.336 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.336 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.595 nvme0n1 00:28:31.595 
00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.595 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.855 nvme0n1 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.855 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.115 nvme0n1 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.115 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.374 
00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.374 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.375 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.375 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.375 00:54:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.375 00:54:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.375 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.375 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:32.375 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.375 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.375 nvme0n1 00:28:32.375 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.375 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.375 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.375 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.375 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.375 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:32.634 00:54:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.634 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.893 nvme0n1 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.893 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.894 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.894 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.894 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.894 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.894 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.894 00:54:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:32.894 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.894 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.153 nvme0n1 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.153 00:54:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.153 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.412 00:54:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.672 nvme0n1 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.672 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.932 nvme0n1 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.932 00:54:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.932 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.191 nvme0n1 00:28:34.191 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.191 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.191 00:54:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.191 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.191 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.191 00:54:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.191 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.191 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.191 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.191 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:34.450 00:54:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.450 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.709 nvme0n1 00:28:34.709 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.709 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.709 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.709 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.709 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.968 
00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.968 00:54:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.968 00:54:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.536 nvme0n1 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.536 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.537 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.103 nvme0n1 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.103 
00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.103 00:54:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.362 nvme0n1 00:28:36.363 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.363 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.363 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.363 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.363 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.363 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.622 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.191 nvme0n1 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.191 00:54:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.759 nvme0n1 00:28:37.759 00:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.759 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.760 00:54:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.760 00:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.760 00:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.760 00:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.760 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.760 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.760 00:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.760 00:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 
-- # local ip 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.019 00:54:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.587 nvme0n1 00:28:38.587 00:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.587 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.587 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.587 00:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.587 00:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.846 00:54:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.414 nvme0n1 00:28:39.414 00:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.414 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.414 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.414 00:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.414 00:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.414 00:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.673 
00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.673 00:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.674 00:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.674 00:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.674 00:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.674 00:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.674 00:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
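
The nvmf/common.sh@741-755 entries traced just above are the helper that resolves which address the host should dial for the transport under test. A minimal reconstruction from that trace follows; the TEST_TRANSPORT variable name and the ${!ip} indirection are assumptions inferred from the "[[ -z tcp ]]" / "ip=NVMF_INITIATOR_IP" / "echo 10.0.0.1" sequence, not lines visible in this excerpt.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    # Map each transport to the *name* of the environment variable holding its address,
    # exactly as the trace shows at nvmf/common.sh@744-745.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # The trace checks the transport string and the chosen candidate for emptiness
    # ([[ -z tcp ]] / [[ -z NVMF_INITIATOR_IP ]]); TEST_TRANSPORT is an assumed name.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}

    # Dereference the variable name to the address itself (10.0.0.1 throughout this run).
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}
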
00:28:39.674 00:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.674 00:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.674 00:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.674 00:54:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.674 00:54:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:39.674 00:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.674 00:54:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.242 nvme0n1 00:28:40.242 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.242 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.242 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.242 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.242 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.242 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.242 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.242 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.242 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.242 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.501 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.501 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.501 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:40.501 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.501 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.501 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:40.501 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.501 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:40.501 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.501 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.501 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:40.501 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:40.501 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:40.502 
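
Each connect_authenticate call in this trace (host/auth.sh@55-65) boils down to four RPCs against the host-side SPDK application. A condensed sketch of one iteration is below; the method names and flags are taken verbatim from the log, while invoking ./scripts/rpc.py directly is an assumption — the suite routes these calls through its own rpc_cmd wrapper.

# One (digest, dhgroup, keyid) iteration, matching the sha256/ffdhe8192/keyid=4 pass above.
DIGEST=sha256 DHGROUP=ffdhe8192 KEYID=4

# Restrict the host to the digest and DH group under test.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"

# Attach over TCP and authenticate with this keyid's host key; keyids 0-3 additionally
# pass --dhchap-ctrlr-key ckey$KEYID for bidirectional auth (keyid 4 has no ckey in this run).
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$KEYID"

# The check that follows in the log: a controller named nvme0 must now exist ...
[[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# ... and it is detached again before the next combination is tried.
./scripts/rpc.py bdev_nvme_detach_controller nvme0
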
00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.502 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.438 nvme0n1 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.438 00:54:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.438 nvme0n1 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.438 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
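
The host/auth.sh@100-103 markers scattered through the trace are the loops driving this sweep, which has just rolled over from sha256 to sha384 with ffdhe2048. Reconstructed, they look roughly like the sketch below; only the digests and DH groups that actually appear in this excerpt are listed (the real arrays may be longer), and the target-side writes done inside nvmet_auth_set_key (the 'hmac(sha384)', dhgroup and DHHC-1 echoes at host/auth.sh@48-51) are omitted because their destinations are not visible in the log.

# Sweep structure implied by the for-loops at host/auth.sh@100-103.
digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
# keys[] / ckeys[] are populated earlier in auth.sh with the DHHC-1 host and
# controller secrets echoed at host/auth.sh@45-46; declared here only so the
# sketch is syntactically self-contained.
declare -a keys ckeys

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                           # keyids 0 through 4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # provision the target's expected key
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify nvme0, detach
        done
    done
done
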
00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.439 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.698 nvme0n1 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.698 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.699 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.958 nvme0n1 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.958 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.218 nvme0n1 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.218 00:54:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.477 nvme0n1 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:42.477 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
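The echo 'hmac(sha384)' / echo ffdhe3072 / echo DHHC-1:... lines are nvmet_auth_set_key provisioning the target side of the handshake. A plausible reconstruction of that helper, assuming the standard Linux nvmet configfs attributes (the real helper and its exact paths live in the test's common scripts, not in this excerpt):

  # hypothetical sketch of: nvmet_auth_set_key <digest> <dhgroup> <keyid>
  hostnqn=nqn.2024-02.io.spdk:host0
  host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
  echo "hmac(sha384)"   > "$host_cfg/dhchap_hash"      # negotiated HMAC digest
  echo "ffdhe3072"      > "$host_cfg/dhchap_dhgroup"   # DH group for the exchange
  echo "DHHC-1:00:...:" > "$host_cfg/dhchap_key"       # ${keys[keyid]}  (host secret)
  echo "DHHC-1:03:...:" > "$host_cfg/dhchap_ctrl_key"  # ${ckeys[keyid]} (controller secret, only when one is defined)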
00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.478 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.737 nvme0n1 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
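The local ip / ip_candidates lines that precede every attach come from get_main_ns_ip in nvmf/common.sh, which resolves the address to dial from the active transport. A rough reconstruction of that selection logic from the xtrace (for tcp it resolves NVMF_INITIATOR_IP, which is 10.0.0.1 on this rig):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # bail out if the transport is unset or has no candidate variable
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      ip=${!ip}          # indirect expansion: variable name -> actual address
      [[ -z $ip ]] && return 1
      echo "$ip"
  }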
00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.737 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.997 nvme0n1 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.997 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.257 nvme0n1 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.257 00:55:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.517 nvme0n1 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.517 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.776 nvme0n1 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.776 00:55:01 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.776 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.777 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.073 nvme0n1 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:44.073 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.074 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.334 00:55:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.592 nvme0n1 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.592 00:55:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.592 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.851 nvme0n1 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:44.851 00:55:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.851 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.109 nvme0n1 00:28:45.109 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.109 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.109 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.109 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.109 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.109 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:45.367 00:55:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.626 nvme0n1 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.626 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.193 nvme0n1 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:46.193 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.194 00:55:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.760 nvme0n1 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.760 00:55:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.760 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.327 nvme0n1 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.327 00:55:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.893 nvme0n1 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
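The trace repeats the same host-side round for every digest/dhgroup/key combination (here sha384 with ffdhe4096, ffdhe6144 and ffdhe8192, key IDs 0-4): restrict the host to the pair under test with bdev_nvme_set_options, attach to the target with the matching DH-HMAC-CHAP key, confirm a controller named nvme0 appears, then detach it. A minimal standalone sketch of one such round follows, assuming SPDK's scripts/rpc.py is reachable (rpc_cmd in the trace is the harness wrapper for the same RPCs), the target already exposes nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420, and the key2/ckey2 names from the trace are already registered on both sides; the target-side nvmet_auth_set_key step is not reproduced here.

    #!/usr/bin/env bash
    # One connect_authenticate round as exercised by host/auth.sh (sketch).
    rpc=scripts/rpc.py
    digest=sha384; dhgroup=ffdhe4096; keyid=2

    # Allow only the digest/dhgroup pair under test on the host side.
    $rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the host key and the bidirectional controller key for this key ID.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Authentication succeeded if the controller shows up under the expected name ...
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # ... and it is detached before the next combination is tried.
    $rpc bdev_nvme_detach_controller nvme0

When a key ID has no controller key (ckey is empty, as for key ID 4 in the trace), the --dhchap-ctrlr-key argument is simply omitted, which is what the ${ckeys[keyid]:+...} expansion seen above takes care of.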
00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.893 00:55:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.457 nvme0n1 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.457 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
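The nvmf/common.sh lines interleaved with each attach come from the get_main_ns_ip helper, which resolves the address passed to -a: it maps the transport under test to the environment variable holding the target address and echoes its value (NVMF_INITIATOR_IP, i.e. 10.0.0.1, for tcp in this run). A rough reconstruction based only on the traced statements is shown below; the TEST_TRANSPORT variable name is an assumption, and the real helper in nvmf/common.sh may differ in detail.

    # Resolve the address used for bdev_nvme_attach_controller -a (sketch).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )

        # The transport must be known and mapped to a candidate variable name.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}      # e.g. NVMF_INITIATOR_IP

        # Indirect expansion: fail if that variable is empty, otherwise print it.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"                             # -> 10.0.0.1 in this run
    }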
00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.458 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.388 nvme0n1 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.388 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.389 00:55:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.956 nvme0n1 00:28:49.956 00:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.956 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.956 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.956 00:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.956 00:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.956 00:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.956 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.956 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.956 00:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.956 00:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.215 00:55:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.782 nvme0n1 00:28:50.782 00:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.782 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.782 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.782 00:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.782 00:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:51.040 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.041 00:55:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.977 nvme0n1 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.977 00:55:09 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.977 00:55:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.543 nvme0n1 00:28:52.543 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.543 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.543 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.543 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.543 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.543 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.802 nvme0n1 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.802 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.061 00:55:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:53.061 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.062 nvme0n1 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.062 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.321 00:55:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.321 nvme0n1 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.321 00:55:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.321 00:55:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.321 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.581 nvme0n1 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.581 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.840 nvme0n1 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:53.840 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.841 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.100 nvme0n1 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.100 
00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.100 00:55:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.100 00:55:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.359 nvme0n1 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
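The outer structure of this run is visible in the host/auth.sh line numbers above (@100-@104): for every digest, every DH group, and every key index the test first programs the target side with nvmet_auth_set_key and then exercises the initiator with connect_authenticate. A minimal sketch of that loop, using only names that appear in the trace (the digests/dhgroups/keys arrays are built earlier in auth.sh, outside this excerpt), is:

  # Sketch of the iteration traced above, not the literal auth.sh source.
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target: select hmac(<digest>), DH group, DHHC-1 key
              connect_authenticate "$digest" "$dhgroup" "$keyid" # initiator: set options, attach, verify, detach
          done
      done
  done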
00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.359 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.619 nvme0n1 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.619 00:55:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
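The get_main_ns_ip helper traced here (nvmf/common.sh@741-755) resolves the address connect_authenticate should dial: it maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), checks that the transport and the mapping are non-empty, and prints the dereferenced value, 10.0.0.1 in this run. A sketch of that logic, with the transport variable name and the use of bash indirect expansion assumed for illustration:

  # Sketch of get_main_ns_ip as traced above; TEST_TRANSPORT and the ${!ip}
  # indirection are assumptions, the candidate map and -z checks mirror the trace.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1   # the named variable must hold an address
      echo "${!ip}"                 # -> 10.0.0.1 here
  }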
00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.619 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.878 nvme0n1 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.878 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.879 
00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:54.879 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.139 nvme0n1 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.139 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.399 00:55:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.662 nvme0n1 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.662 00:55:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.662 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.920 nvme0n1 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
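Stripped of the rpc_cmd/xtrace wrappers, each connect_authenticate pass above reduces to four bdev_nvme RPCs: restrict the allowed DH-HMAC-CHAP digest and DH group, attach the controller with the host (and, when present, controller) key, confirm that nvme0 appears, and detach it again. Assuming rpc_cmd forwards to scripts/rpc.py and that the key names key1/ckey1 were registered earlier in the test, the sha512/ffdhe4096 pass for key index 1 corresponds to:

  # Illustrative replay of one connect_authenticate pass; the rpc.py path and the
  # pre-registered key names (key1/ckey1) come from setup outside this excerpt.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0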
00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.920 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.179 nvme0n1 00:28:56.179 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.179 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:56.179 00:55:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.179 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.179 00:55:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.179 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.438 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.697 nvme0n1 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.697 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.956 nvme0n1 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:56.956 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.957 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:57.216 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.216 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.216 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
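Annotation: the host/auth.sh for-loop markers visible above ("for dhgroup in ..." and "for keyid in ...") show the sweep structure of this phase: for every DH group, each of the five keys is provisioned on the target and then exercised from the host. A reconstruction of the driving loop is sketched below; the dhgroups, keys and ckeys arrays are defined earlier in the script, outside this excerpt, and the digest is fixed at sha512 for this part of the trace.

  for dhgroup in "${dhgroups[@]}"; do            # ffdhe4096, ffdhe6144, ffdhe8192 in this excerpt
    for keyid in "${!keys[@]}"; do               # keyids 0..4; keyid 4 carries no controller key
      nvmet_auth_set_key sha512 "$dhgroup" "$keyid"      # target side, kernel nvmet
      connect_authenticate sha512 "$dhgroup" "$keyid"    # host side, SPDK bdev_nvme RPCs
    done
  done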
00:28:57.216 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.216 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:57.216 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:57.216 00:55:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:57.216 00:55:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:57.216 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.216 00:55:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.475 nvme0n1 00:28:57.475 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.475 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.475 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.475 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.475 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.475 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
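Annotation: the ffdhe6144/keyid-0 pass that just completed above is the host-side half, connect_authenticate, and it reduces to four RPCs that appear verbatim in the trace. The sketch below strings them together as standalone rpc.py invocations; assuming rpc_cmd in the test is a thin wrapper around scripts/rpc.py with the same arguments, and key0/ckey0 are names of DH-HMAC-CHAP keys registered earlier in the run. The bare nvme0n1 lines in the trace are the namespace bdev reported once the attach succeeds.

  # pin the initiator to the digest/DH group under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # attach with the host key (and, when present, the controller key) for this keyid
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # authentication passed if the controller shows up; detach to make room for the next case
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0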
00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:57.734 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.735 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:57.735 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:57.735 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:57.735 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:57.735 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.735 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.994 nvme0n1 00:28:57.994 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.994 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.994 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.994 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.994 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.254 00:55:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.823 nvme0n1 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.823 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.824 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.392 nvme0n1 00:28:59.392 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.392 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.392 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.392 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.392 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.392 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.392 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.392 00:55:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.392 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.392 00:55:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.392 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.960 nvme0n1 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.960 00:55:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmY1ZjQ4NDdlODg5ODI0OGM5NjI0Yzg5NGI5ZjkyOWU1JDBS: 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: ]] 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWJlODVjMmEyMjYwODQ4ZGU3MzRiOWQ2MTgzMzI1OGIyNmYxZmJmZTZhNmNjNzg4NzMzNGE3MGM5N2E3NWI5M6J4WL0=: 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.960 00:55:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.894 nvme0n1 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:00.894 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.895 00:55:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.461 nvme0n1 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.461 00:55:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWE0M2JjYTkwM2Y2YTdiZjUzZjdkZDI0YTJhNGEwMTZSS45u: 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: ]] 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThjNzExMWQ2NjkyN2ZlNjdhN2E3NDAwMjQ2N2I0ODeYDRam: 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.461 00:55:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.437 nvme0n1 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDRjMWY2MjlmYWNjYmM1OTRhMzY4NmVmZTAyY2ExZWE4YmJjYjMzNmU3MGViMmIym7Pnig==: 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: ]] 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzg4ZDYxZWJiMmFmMmM4YzAzY2NhODM4NzEwMDZmM2ISLmav: 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:02.437 00:55:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.437 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.003 nvme0n1 00:29:03.003 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.003 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.003 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.003 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.003 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2M1ZWYxNjg5NjQzZTlmNzU5NWQ3ZjkzMTVhOTc2MWRjYWI3NDA1NGEyZjllN2QyYzE1NGI2ZDU2NTA1NzVlN2sWiCg=: 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:03.261 00:55:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.195 nvme0n1 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWVlNWIwOWVmZDBmODc2MTc3M2YzMmE4NzYxODE5ZmNlNmY2M2EyODZhZmRhOTli5pu6QA==: 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: ]] 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjMxOWZhYTI4ZjdlOGNkMzBjMjJkZjk2ZGMwN2IyMjJiYjQ1YzA1ZDdmMTkxMTQxHR77jw==: 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.195 
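The nvmf/common.sh@741-755 fragment that repeats before every attach above is the get_main_ns_ip helper resolving the initiator-side address; the function body below is a paraphrase of that xtrace, not the file itself, and $TEST_TRANSPORT stands in for the literal "tcp" seen in the log:

  get_main_ns_ip() {
      local -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
      local ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp here, so ip=NVMF_INITIATOR_IP
      echo "${!ip}"                                # indirect expansion of that variable, 10.0.0.1 in this run
  }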
00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.195 request: 00:29:04.195 { 00:29:04.195 "name": "nvme0", 00:29:04.195 "trtype": "tcp", 00:29:04.195 "traddr": "10.0.0.1", 00:29:04.195 "adrfam": "ipv4", 00:29:04.195 "trsvcid": "4420", 00:29:04.195 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:04.195 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:04.195 "prchk_reftag": false, 00:29:04.195 "prchk_guard": false, 00:29:04.195 "hdgst": false, 00:29:04.195 "ddgst": false, 00:29:04.195 "method": "bdev_nvme_attach_controller", 00:29:04.195 "req_id": 1 00:29:04.195 } 00:29:04.195 Got JSON-RPC error response 00:29:04.195 response: 00:29:04.195 { 00:29:04.195 "code": -5, 00:29:04.195 "message": "Input/output error" 00:29:04.195 } 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:04.195 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.196 request: 00:29:04.196 { 00:29:04.196 "name": "nvme0", 00:29:04.196 "trtype": "tcp", 00:29:04.196 "traddr": "10.0.0.1", 00:29:04.196 "adrfam": "ipv4", 00:29:04.196 "trsvcid": "4420", 00:29:04.196 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:04.196 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:04.196 "prchk_reftag": false, 00:29:04.196 "prchk_guard": false, 00:29:04.196 "hdgst": false, 00:29:04.196 "ddgst": false, 00:29:04.196 "dhchap_key": "key2", 00:29:04.196 "method": "bdev_nvme_attach_controller", 00:29:04.196 "req_id": 1 00:29:04.196 } 00:29:04.196 Got JSON-RPC error response 00:29:04.196 response: 00:29:04.196 { 00:29:04.196 "code": -5, 00:29:04.196 "message": "Input/output error" 00:29:04.196 } 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:04.196 00:55:21 
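The host/auth.sh@112-@120 checks above are the negative half of the test; a sketch condensed from the attempts visible in the log (NOT is the suite's helper that inverts the exit status, so each attach is required to fail with the JSON-RPC -5 Input/output error shown above and leave no controller behind):

  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0               # no key at all
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2   # key the target was not given
  (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))                  # nothing got attached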
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.196 00:55:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.196 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.455 request: 00:29:04.455 { 00:29:04.455 "name": "nvme0", 00:29:04.455 "trtype": "tcp", 00:29:04.455 "traddr": "10.0.0.1", 00:29:04.455 "adrfam": "ipv4", 
00:29:04.455 "trsvcid": "4420", 00:29:04.455 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:04.455 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:04.455 "prchk_reftag": false, 00:29:04.455 "prchk_guard": false, 00:29:04.455 "hdgst": false, 00:29:04.455 "ddgst": false, 00:29:04.455 "dhchap_key": "key1", 00:29:04.455 "dhchap_ctrlr_key": "ckey2", 00:29:04.455 "method": "bdev_nvme_attach_controller", 00:29:04.455 "req_id": 1 00:29:04.455 } 00:29:04.455 Got JSON-RPC error response 00:29:04.455 response: 00:29:04.455 { 00:29:04.455 "code": -5, 00:29:04.455 "message": "Input/output error" 00:29:04.455 } 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:04.455 rmmod nvme_tcp 00:29:04.455 rmmod nvme_fabrics 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3192647 ']' 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3192647 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3192647 ']' 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3192647 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3192647 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3192647' 00:29:04.455 killing process with pid 3192647 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3192647 00:29:04.455 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3192647 00:29:04.714 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:29:04.714 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:04.714 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:04.714 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:04.714 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:04.714 00:55:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.714 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:04.714 00:55:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.257 00:55:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:07.257 00:55:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:07.257 00:55:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:07.257 00:55:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:07.257 00:55:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:07.257 00:55:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:29:07.257 00:55:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:07.257 00:55:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:07.257 00:55:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:07.257 00:55:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:07.257 00:55:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:07.257 00:55:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:07.257 00:55:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:09.792 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:09.792 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:10.730 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:29:10.730 00:55:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Ycl /tmp/spdk.key-null.9QK /tmp/spdk.key-sha256.EJf /tmp/spdk.key-sha384.L5p /tmp/spdk.key-sha512.BqF 
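The clean_kernel_target sequence above tears the kernel nvmet configfs tree down bottom-up before unloading the modules; a sketch assembled from the paths in this log (the redirection target of the bare 'echo 0' is not shown in the xtrace, so the namespace enable attribute below is an assumption):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  rm    "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > "$subsys/namespaces/1/enable"      # assumption: disables the namespace before removal
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet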
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:10.730 00:55:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:13.268 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:13.268 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:13.268 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:13.528 00:29:13.528 real 0m56.724s 00:29:13.528 user 0m52.062s 00:29:13.528 sys 0m12.600s 00:29:13.528 00:55:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:13.528 00:55:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.528 ************************************ 00:29:13.528 END TEST nvmf_auth_host 00:29:13.528 ************************************ 00:29:13.528 00:55:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:13.528 00:55:31 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:29:13.528 00:55:31 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:13.528 00:55:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:13.528 00:55:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:13.528 00:55:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:13.528 ************************************ 00:29:13.528 START TEST nvmf_digest 00:29:13.528 ************************************ 00:29:13.528 00:55:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:13.528 * Looking for test storage... 
00:29:13.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:13.788 00:55:31 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:13.788 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:13.788 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:13.788 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:13.788 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:13.788 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:13.788 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:13.788 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:13.788 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:13.789 00:55:31 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:29:13.789 00:55:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:19.117 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:19.118 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:19.118 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:19.118 Found net devices under 0000:af:00.0: cvl_0_0 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:19.118 Found net devices under 0000:af:00.1: cvl_0_1 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:19.118 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.378 00:55:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.378 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.378 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.378 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:19.378 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.378 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.378 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.378 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:19.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:29:19.378 00:29:19.378 --- 10.0.0.2 ping statistics --- 00:29:19.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.378 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:29:19.378 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:19.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:29:19.637 00:29:19.637 --- 10.0.0.1 ping statistics --- 00:29:19.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.637 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:19.637 ************************************ 00:29:19.637 START TEST nvmf_digest_clean 00:29:19.637 ************************************ 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:29:19.637 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3207604 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3207604 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3207604 ']' 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.638 
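The 10.0.0.1/10.0.0.2 ping pair above comes from the nvmf_tcp_init sequence that splits the two E810 ports across a network namespace; condensed from the ip/iptables commands visible in this log (cvl_0_0 and cvl_0_1 are the two ports found under 0000:af:00.0/1 earlier):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # first port becomes the target-side NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator keeps the second port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1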
00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:19.638 00:55:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.638 [2024-07-16 00:55:37.348219] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:29:19.638 [2024-07-16 00:55:37.348284] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.638 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.638 [2024-07-16 00:55:37.438633] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.897 [2024-07-16 00:55:37.527538] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.897 [2024-07-16 00:55:37.527580] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.897 [2024-07-16 00:55:37.527590] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.897 [2024-07-16 00:55:37.527600] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.897 [2024-07-16 00:55:37.527608] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
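The digest target is the same nvmf_tgt binary started inside the target namespace; a sketch of the nvmfappstart lines above (waitforlisten is the suite's poll-until-the-RPC-socket-answers helper, and the batched common_target_config RPCs are not expanded in the xtrace, so only their observable effect is noted in the comment):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!                      # 3207604 in this run
  waitforlisten "$nvmfpid"
  # common_target_config then sets up the target; the notices further down show the result:
  # a null0 bdev and a TCP listener on 10.0.0.2 port 4420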
00:29:19.897 [2024-07-16 00:55:37.527630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.464 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:20.465 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:20.465 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:20.465 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:20.465 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:20.723 null0 00:29:20.723 [2024-07-16 00:55:38.412772] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.723 [2024-07-16 00:55:38.436947] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3207884 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3207884 /var/tmp/bperf.sock 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3207884 ']' 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:29:20.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:20.723 00:55:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:20.723 [2024-07-16 00:55:38.494275] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:29:20.723 [2024-07-16 00:55:38.494333] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207884 ] 00:29:20.723 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.981 [2024-07-16 00:55:38.576871] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.981 [2024-07-16 00:55:38.682694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.914 00:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:21.914 00:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:21.914 00:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:21.914 00:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:21.914 00:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:21.914 00:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.914 00:55:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.172 nvme0n1 00:29:22.429 00:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:22.429 00:55:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:22.429 Running I/O for 2 seconds... 
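The run now in flight is driven entirely over the bperf.sock RPC socket; a sketch of the wiring, taken from the bdevperf invocation and bperf_rpc calls visible above (paths shortened to the repo-relative form):

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests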
00:29:24.955 00:29:24.955 Latency(us) 00:29:24.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.955 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:24.955 nvme0n1 : 2.05 13786.44 53.85 0.00 0.00 9098.95 5034.36 48854.11 00:29:24.955 =================================================================================================================== 00:29:24.955 Total : 13786.44 53.85 0.00 0.00 9098.95 5034.36 48854.11 00:29:24.955 0 00:29:24.955 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:24.956 | select(.opcode=="crc32c") 00:29:24.956 | "\(.module_name) \(.executed)"' 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3207884 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3207884 ']' 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3207884 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3207884 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3207884' 00:29:24.956 killing process with pid 3207884 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3207884 00:29:24.956 Received shutdown signal, test time was about 2.000000 seconds 00:29:24.956 00:29:24.956 Latency(us) 00:29:24.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.956 =================================================================================================================== 00:29:24.956 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3207884 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:24.956 00:55:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3208679 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3208679 /var/tmp/bperf.sock 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3208679 ']' 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:24.956 00:55:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:25.214 [2024-07-16 00:55:42.799804] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:29:25.214 [2024-07-16 00:55:42.799865] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208679 ] 00:29:25.214 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:25.214 Zero copy mechanism will not be used. 
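Between runs, digest.sh checks that the CRC-32C work was really executed by the expected accel module (software here, since scan_dsa=false). A condensed sketch of that check, rearranged from the accel_get_stats and jq calls visible in the trace above (grouping them into one pipeline is illustrative, not the literal script structure):

  # fetch accel statistics from the bdevperf RPC socket and keep only the crc32c counters
  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
    | { read -r acc_module acc_executed
        # the run only passes if crc32c executed at least once in the expected module
        (( acc_executed > 0 )) && [[ "$acc_module" == "software" ]]; }

The bdevperf process is then killed (killprocess) and the next workload iteration starts a fresh instance, as seen above.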
00:29:25.214 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.214 [2024-07-16 00:55:42.882700] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.214 [2024-07-16 00:55:42.982672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.148 00:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:26.148 00:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:26.148 00:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:26.148 00:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:26.148 00:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:26.148 00:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:26.148 00:55:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:26.714 nvme0n1 00:29:26.714 00:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:26.714 00:55:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:26.714 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:26.714 Zero copy mechanism will not be used. 00:29:26.714 Running I/O for 2 seconds... 
00:29:29.248 00:29:29.248 Latency(us) 00:29:29.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.248 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:29.248 nvme0n1 : 2.00 3606.37 450.80 0.00 0.00 4431.14 1288.38 9770.82 00:29:29.248 =================================================================================================================== 00:29:29.248 Total : 3606.37 450.80 0.00 0.00 4431.14 1288.38 9770.82 00:29:29.248 0 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:29.248 | select(.opcode=="crc32c") 00:29:29.248 | "\(.module_name) \(.executed)"' 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3208679 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3208679 ']' 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3208679 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3208679 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3208679' 00:29:29.248 killing process with pid 3208679 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3208679 00:29:29.248 Received shutdown signal, test time was about 2.000000 seconds 00:29:29.248 00:29:29.248 Latency(us) 00:29:29.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.248 =================================================================================================================== 00:29:29.248 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:29.248 00:55:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3208679 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:29.248 00:55:47 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3209352 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3209352 /var/tmp/bperf.sock 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3209352 ']' 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:29.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:29.248 00:55:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:29.248 [2024-07-16 00:55:47.053200] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:29:29.248 [2024-07-16 00:55:47.053268] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209352 ] 00:29:29.248 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.507 [2024-07-16 00:55:47.136204] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.507 [2024-07-16 00:55:47.242404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.444 00:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:30.444 00:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:30.444 00:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:30.444 00:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:30.444 00:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:30.703 00:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:30.703 00:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:30.962 nvme0n1 00:29:30.962 00:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:30.962 00:55:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:30.962 Running I/O for 2 seconds... 
00:29:33.498 00:29:33.498 Latency(us) 00:29:33.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.498 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.498 nvme0n1 : 2.01 17986.69 70.26 0.00 0.00 7102.26 3589.59 15847.80 00:29:33.498 =================================================================================================================== 00:29:33.498 Total : 17986.69 70.26 0.00 0.00 7102.26 3589.59 15847.80 00:29:33.498 0 00:29:33.498 00:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:33.498 00:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:33.498 00:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:33.498 00:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:33.498 | select(.opcode=="crc32c") 00:29:33.498 | "\(.module_name) \(.executed)"' 00:29:33.498 00:55:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3209352 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3209352 ']' 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3209352 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3209352 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3209352' 00:29:33.498 killing process with pid 3209352 00:29:33.498 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3209352 00:29:33.498 Received shutdown signal, test time was about 2.000000 seconds 00:29:33.498 00:29:33.498 Latency(us) 00:29:33.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.498 =================================================================================================================== 00:29:33.499 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:33.499 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3209352 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:33.758 00:55:51 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3210034 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3210034 /var/tmp/bperf.sock 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3210034 ']' 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:33.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:33.758 00:55:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:33.758 [2024-07-16 00:55:51.421087] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:29:33.758 [2024-07-16 00:55:51.421152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210034 ] 00:29:33.758 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:33.758 Zero copy mechanism will not be used. 
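For orientation, the four clean-digest workloads exercised in this test map the run_bperf arguments directly onto bdevperf flags; all use core mask 0x2, a 2 second runtime and --wait-for-rpc, as seen in the invocations above:

  # run_bperf <rw> <bs> <qd> false  maps to  bdevperf -w <rw> -o <bs> -q <qd>
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread  -o 4096   -t 2 -q 128 -z --wait-for-rpc
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread  -o 131072 -t 2 -q 16  -z --wait-for-rpc
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096   -t 2 -q 128 -z --wait-for-rpc
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16  -z --wait-for-rpc

The 131072-byte runs additionally log that the I/O size exceeds the 65536-byte zero copy threshold; that message is informational and does not indicate a failure.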
00:29:33.758 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.758 [2024-07-16 00:55:51.505710] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.016 [2024-07-16 00:55:51.606401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.583 00:55:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.583 00:55:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:34.583 00:55:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:34.583 00:55:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:34.583 00:55:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:35.153 00:55:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.153 00:55:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.412 nvme0n1 00:29:35.412 00:55:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:35.412 00:55:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:35.412 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:35.412 Zero copy mechanism will not be used. 00:29:35.412 Running I/O for 2 seconds... 
00:29:37.942 00:29:37.942 Latency(us) 00:29:37.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.942 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:37.942 nvme0n1 : 2.00 4507.19 563.40 0.00 0.00 3542.40 2144.81 14298.76 00:29:37.942 =================================================================================================================== 00:29:37.942 Total : 4507.19 563.40 0.00 0.00 3542.40 2144.81 14298.76 00:29:37.942 0 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:37.942 | select(.opcode=="crc32c") 00:29:37.942 | "\(.module_name) \(.executed)"' 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3210034 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3210034 ']' 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3210034 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3210034 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3210034' 00:29:37.942 killing process with pid 3210034 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3210034 00:29:37.942 Received shutdown signal, test time was about 2.000000 seconds 00:29:37.942 00:29:37.942 Latency(us) 00:29:37.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.942 =================================================================================================================== 00:29:37.942 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3210034 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3207604 00:29:37.942 00:55:55 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3207604 ']' 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3207604 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:37.942 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3207604 00:29:38.202 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:38.202 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:38.202 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3207604' 00:29:38.202 killing process with pid 3207604 00:29:38.202 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3207604 00:29:38.202 00:55:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3207604 00:29:38.202 00:29:38.202 real 0m18.723s 00:29:38.202 user 0m37.639s 00:29:38.202 sys 0m4.278s 00:29:38.202 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:38.202 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:38.202 ************************************ 00:29:38.202 END TEST nvmf_digest_clean 00:29:38.202 ************************************ 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:38.462 ************************************ 00:29:38.462 START TEST nvmf_digest_error 00:29:38.462 ************************************ 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3210919 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3210919 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3210919 ']' 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:38.462 00:55:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:38.462 [2024-07-16 00:55:56.143584] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:29:38.462 [2024-07-16 00:55:56.143639] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.462 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.462 [2024-07-16 00:55:56.232517] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.721 [2024-07-16 00:55:56.319991] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.721 [2024-07-16 00:55:56.320034] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.721 [2024-07-16 00:55:56.320044] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:38.721 [2024-07-16 00:55:56.320053] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:38.721 [2024-07-16 00:55:56.320060] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
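The nvmf_digest_error test that begins above exercises the same TCP transport with CRC-32C failures injected on the target side. Condensed from the rpc_cmd calls in the surrounding trace (the inject calls themselves appear a little further below), the wiring is roughly:

  # target side (default /var/tmp/spdk.sock): route crc32c through the error-injection accel module
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  # the null0 bdev, TCP transport and 10.0.0.2:4420 listener are then created as usual
  # keep injection disabled while the host attaches with --ddgst ...
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # ... then switch to corrupting digests (-t corrupt -i 256, arguments taken verbatim from digest.sh)
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

On the host, bdevperf is configured with bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 before attaching, so the corrupted digests surface in the log below as "data digest error" messages followed by COMMAND TRANSIENT TRANSPORT ERROR completions.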
00:29:38.721 [2024-07-16 00:55:56.320089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.289 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:39.289 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:39.289 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:39.289 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:39.289 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:39.289 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.289 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:39.289 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.289 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:39.289 [2024-07-16 00:55:57.122516] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:39.289 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.289 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:39.289 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:39.549 null0 00:29:39.549 [2024-07-16 00:55:57.222266] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.549 [2024-07-16 00:55:57.246449] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3211123 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3211123 /var/tmp/bperf.sock 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3211123 ']' 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:39.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:39.549 00:55:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:39.549 [2024-07-16 00:55:57.302541] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:29:39.549 [2024-07-16 00:55:57.302597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211123 ] 00:29:39.549 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.549 [2024-07-16 00:55:57.386590] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.808 [2024-07-16 00:55:57.495126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.376 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:40.376 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:40.376 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:40.376 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:40.652 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:40.652 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:40.652 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:40.652 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:40.652 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:40.652 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:41.218 nvme0n1 00:29:41.218 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:41.218 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.218 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:41.218 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.218 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:41.218 00:55:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:41.218 Running I/O for 2 seconds... 00:29:41.476 [2024-07-16 00:55:59.059898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.059948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.059966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.081355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.081397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.081414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.094884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.094917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.094932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.114837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.114871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.114886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.133367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.133402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.133418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.148237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.148277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.148299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.167225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.167267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18951 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.167284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.186268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.186301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.186316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.201462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.201495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.201510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.222395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.222429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.222444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.237962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.237996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.238011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.258896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.258932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.258948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.276201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.276236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.276251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.289476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.289509] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.289523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.476 [2024-07-16 00:55:59.308356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.476 [2024-07-16 00:55:59.308396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.476 [2024-07-16 00:55:59.308411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.325131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 00:55:59.325170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.325187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.340199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 00:55:59.340235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.340251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.358090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 00:55:59.358124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.358139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.372962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 00:55:59.372995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.373010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.394063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 00:55:59.394096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.394112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.409625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 
00:55:59.409658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.409673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.430753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 00:55:59.430786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.430801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.445927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 00:55:59.445960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.445974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.468364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 00:55:59.468399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.468414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.488659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 00:55:59.488693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.488708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.504783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 00:55:59.504816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.504831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.524788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 00:55:59.524820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.524835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.540889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 00:55:59.540920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.540935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.735 [2024-07-16 00:55:59.562338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.735 [2024-07-16 00:55:59.562372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.735 [2024-07-16 00:55:59.562386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.994 [2024-07-16 00:55:59.576754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.994 [2024-07-16 00:55:59.576786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.994 [2024-07-16 00:55:59.576801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.994 [2024-07-16 00:55:59.597060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.994 [2024-07-16 00:55:59.597094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.994 [2024-07-16 00:55:59.597110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.994 [2024-07-16 00:55:59.613229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.994 [2024-07-16 00:55:59.613268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.994 [2024-07-16 00:55:59.613289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.994 [2024-07-16 00:55:59.633785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.994 [2024-07-16 00:55:59.633818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.994 [2024-07-16 00:55:59.633833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.994 [2024-07-16 00:55:59.649172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.994 [2024-07-16 00:55:59.649203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.995 [2024-07-16 00:55:59.649218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.995 [2024-07-16 00:55:59.670134] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.995 [2024-07-16 00:55:59.670167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.995 [2024-07-16 00:55:59.670184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.995 [2024-07-16 00:55:59.685553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.995 [2024-07-16 00:55:59.685585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.995 [2024-07-16 00:55:59.685601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.995 [2024-07-16 00:55:59.706422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.995 [2024-07-16 00:55:59.706454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.995 [2024-07-16 00:55:59.706469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.995 [2024-07-16 00:55:59.725720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.995 [2024-07-16 00:55:59.725753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.995 [2024-07-16 00:55:59.725768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.995 [2024-07-16 00:55:59.741406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.995 [2024-07-16 00:55:59.741440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.995 [2024-07-16 00:55:59.741455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.995 [2024-07-16 00:55:59.762560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.995 [2024-07-16 00:55:59.762593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.995 [2024-07-16 00:55:59.762608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.995 [2024-07-16 00:55:59.777016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.995 [2024-07-16 00:55:59.777048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.995 [2024-07-16 00:55:59.777062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:41.995 [2024-07-16 00:55:59.798014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.995 [2024-07-16 00:55:59.798047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.995 [2024-07-16 00:55:59.798061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:41.995 [2024-07-16 00:55:59.819577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:41.995 [2024-07-16 00:55:59.819611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.995 [2024-07-16 00:55:59.819626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.254 [2024-07-16 00:55:59.839081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.254 [2024-07-16 00:55:59.839115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.254 [2024-07-16 00:55:59.839130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.254 [2024-07-16 00:55:59.855091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.254 [2024-07-16 00:55:59.855125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.254 [2024-07-16 00:55:59.855140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.254 [2024-07-16 00:55:59.876001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.254 [2024-07-16 00:55:59.876035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.254 [2024-07-16 00:55:59.876051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.254 [2024-07-16 00:55:59.891752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.254 [2024-07-16 00:55:59.891784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.254 [2024-07-16 00:55:59.891799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.254 [2024-07-16 00:55:59.912900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.254 [2024-07-16 00:55:59.912934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.254 [2024-07-16 00:55:59.912949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.254 [2024-07-16 00:55:59.928312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.254 [2024-07-16 00:55:59.928352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.254 [2024-07-16 00:55:59.928373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.254 [2024-07-16 00:55:59.949663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.254 [2024-07-16 00:55:59.949696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.254 [2024-07-16 00:55:59.949711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.254 [2024-07-16 00:55:59.965139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.254 [2024-07-16 00:55:59.965171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.254 [2024-07-16 00:55:59.965186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.254 [2024-07-16 00:55:59.985650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.255 [2024-07-16 00:55:59.985682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.255 [2024-07-16 00:55:59.985697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.255 [2024-07-16 00:56:00.001005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.255 [2024-07-16 00:56:00.001037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.255 [2024-07-16 00:56:00.001052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.255 [2024-07-16 00:56:00.024293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.255 [2024-07-16 00:56:00.024332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.255 [2024-07-16 00:56:00.024349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.255 [2024-07-16 00:56:00.047661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.255 [2024-07-16 00:56:00.047695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.255 [2024-07-16 00:56:00.047711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.255 [2024-07-16 00:56:00.069166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.255 [2024-07-16 00:56:00.069201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.255 [2024-07-16 00:56:00.069218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.255 [2024-07-16 00:56:00.084381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.255 [2024-07-16 00:56:00.084415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.255 [2024-07-16 00:56:00.084431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.513 [2024-07-16 00:56:00.104074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.513 [2024-07-16 00:56:00.104115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.513 [2024-07-16 00:56:00.104131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.513 [2024-07-16 00:56:00.124020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.513 [2024-07-16 00:56:00.124053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.513 [2024-07-16 00:56:00.124068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.513 [2024-07-16 00:56:00.140896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.513 [2024-07-16 00:56:00.140929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.513 [2024-07-16 00:56:00.140944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.514 [2024-07-16 00:56:00.159529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.514 [2024-07-16 00:56:00.159560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.514 [2024-07-16 00:56:00.159575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.514 [2024-07-16 00:56:00.177050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.514 [2024-07-16 00:56:00.177083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:42.514 [2024-07-16 00:56:00.177098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.514 [2024-07-16 00:56:00.191989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.514 [2024-07-16 00:56:00.192022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.514 [2024-07-16 00:56:00.192036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.514 [2024-07-16 00:56:00.211852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.514 [2024-07-16 00:56:00.211886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.514 [2024-07-16 00:56:00.211902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.514 [2024-07-16 00:56:00.226537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.514 [2024-07-16 00:56:00.226570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.514 [2024-07-16 00:56:00.226585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.514 [2024-07-16 00:56:00.245611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.514 [2024-07-16 00:56:00.245642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.514 [2024-07-16 00:56:00.245657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.514 [2024-07-16 00:56:00.267763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.514 [2024-07-16 00:56:00.267797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.514 [2024-07-16 00:56:00.267812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.514 [2024-07-16 00:56:00.284082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.514 [2024-07-16 00:56:00.284117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.514 [2024-07-16 00:56:00.284133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.514 [2024-07-16 00:56:00.297950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.514 [2024-07-16 00:56:00.297983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:18465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.514 [2024-07-16 00:56:00.297998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.514 [2024-07-16 00:56:00.311744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.514 [2024-07-16 00:56:00.311776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.514 [2024-07-16 00:56:00.311791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.514 [2024-07-16 00:56:00.330401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.514 [2024-07-16 00:56:00.330433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.514 [2024-07-16 00:56:00.330448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.514 [2024-07-16 00:56:00.349377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.514 [2024-07-16 00:56:00.349410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.514 [2024-07-16 00:56:00.349425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.772 [2024-07-16 00:56:00.363639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.772 [2024-07-16 00:56:00.363672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.772 [2024-07-16 00:56:00.363687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.772 [2024-07-16 00:56:00.386039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.772 [2024-07-16 00:56:00.386071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.772 [2024-07-16 00:56:00.386086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.772 [2024-07-16 00:56:00.405833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.772 [2024-07-16 00:56:00.405865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.772 [2024-07-16 00:56:00.405885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.772 [2024-07-16 00:56:00.427845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.772 [2024-07-16 00:56:00.427876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.772 [2024-07-16 00:56:00.427892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.772 [2024-07-16 00:56:00.448735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.772 [2024-07-16 00:56:00.448767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.772 [2024-07-16 00:56:00.448782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.772 [2024-07-16 00:56:00.464753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.772 [2024-07-16 00:56:00.464784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.772 [2024-07-16 00:56:00.464799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.772 [2024-07-16 00:56:00.480202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.772 [2024-07-16 00:56:00.480233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.772 [2024-07-16 00:56:00.480248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.772 [2024-07-16 00:56:00.500927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.772 [2024-07-16 00:56:00.500962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.772 [2024-07-16 00:56:00.500977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.772 [2024-07-16 00:56:00.515763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.772 [2024-07-16 00:56:00.515796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.772 [2024-07-16 00:56:00.515812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.772 [2024-07-16 00:56:00.534202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.772 [2024-07-16 00:56:00.534235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.772 [2024-07-16 00:56:00.534250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.772 [2024-07-16 00:56:00.550097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 
00:29:42.772 [2024-07-16 00:56:00.550130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.772 [2024-07-16 00:56:00.550145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.772 [2024-07-16 00:56:00.571019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.772 [2024-07-16 00:56:00.571062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.772 [2024-07-16 00:56:00.571079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.772 [2024-07-16 00:56:00.592908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:42.772 [2024-07-16 00:56:00.592942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.772 [2024-07-16 00:56:00.592958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.612804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.612839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.612855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.631040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.631073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.631088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.650250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.650290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.650304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.663902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.663935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.663950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.682189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.682223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.682237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.701498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.701532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.701547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.715925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.715959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.715979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.735173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.735207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.735222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.754053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.754085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.754100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.768921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.768953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.768968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.784267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.784299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.784315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.798690] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.798722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.798736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.816500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.816532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.816546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.832056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.832089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.832104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.850785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.850817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.850832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.032 [2024-07-16 00:56:00.867413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.032 [2024-07-16 00:56:00.867450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.032 [2024-07-16 00:56:00.867465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.291 [2024-07-16 00:56:00.887354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.291 [2024-07-16 00:56:00.887388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.291 [2024-07-16 00:56:00.887405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.291 [2024-07-16 00:56:00.904694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.291 [2024-07-16 00:56:00.904728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.291 [2024-07-16 00:56:00.904743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:43.291 [2024-07-16 00:56:00.918682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.291 [2024-07-16 00:56:00.918716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.291 [2024-07-16 00:56:00.918731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.291 [2024-07-16 00:56:00.931989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.291 [2024-07-16 00:56:00.932021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.291 [2024-07-16 00:56:00.932036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.291 [2024-07-16 00:56:00.947211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.291 [2024-07-16 00:56:00.947244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.291 [2024-07-16 00:56:00.947269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.291 [2024-07-16 00:56:00.964640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.291 [2024-07-16 00:56:00.964674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.291 [2024-07-16 00:56:00.964688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.291 [2024-07-16 00:56:00.980054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.291 [2024-07-16 00:56:00.980088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.291 [2024-07-16 00:56:00.980102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.291 [2024-07-16 00:56:01.001688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.291 [2024-07-16 00:56:01.001721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.291 [2024-07-16 00:56:01.001736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.291 [2024-07-16 00:56:01.016204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0) 00:29:43.291 [2024-07-16 00:56:01.016235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.291 [2024-07-16 00:56:01.016250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.291 [2024-07-16 00:56:01.034492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134b6a0)
00:29:43.291 [2024-07-16 00:56:01.034526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.291 [2024-07-16 00:56:01.034540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:43.291
00:29:43.291 Latency(us)
00:29:43.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:43.291 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:43.291 nvme0n1 : 2.01 14239.72 55.62 0.00 0.00 8975.58 4706.68 30980.65
00:29:43.291 ===================================================================================================================
00:29:43.291 Total : 14239.72 55.62 0.00 0.00 8975.58 4706.68 30980.65
00:29:43.291 0
00:29:43.291 00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:43.291 | .driver_specific
00:29:43.291 | .nvme_error
00:29:43.291 | .status_code
00:29:43.291 | .command_transient_transport_error'
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:43.550 00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 111 > 0 ))
00:29:43.550 00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3211123
00:29:43.550 00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3211123 ']'
00:29:43.550 00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3211123
00:29:43.550 00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:43.550 00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:43.550 00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3211123
00:29:43.550 00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:43.550 00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:43.550 00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3211123'
00:29:43.550 killing process with pid 3211123
00:29:43.550 00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3211123
00:29:43.550 Received shutdown signal, test time was about 2.000000 seconds
00:29:43.550
00:29:43.550 Latency(us)
00:29:43.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:43.550 ===================================================================================================================
00:29:43.550 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:56:01
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3211123
00:29:43.809 00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3211912
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3211912 /var/tmp/bperf.sock
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3211912 ']'
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:56:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:43.809 [2024-07-16 00:56:01.643039] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization...
00:29:43.809 [2024-07-16 00:56:01.643103] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211912 ]
00:29:43.809 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:43.809 Zero copy mechanism will not be used.
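The run_bperf_err helper traced above starts a dedicated bdevperf process in wait-for-RPC mode and only configures it once its UNIX-domain socket is up, so everything for this error-injection pass goes through the private /var/tmp/bperf.sock endpoint rather than the target's default socket. A minimal stand-alone sketch of that launch step follows, reusing only the binary and flags visible in the trace; the socket poll at the end is a simplifying stand-in for the harness's waitforlisten helper, not the actual harness code:

  # Launch bdevperf pinned to core 1 (-m 2) with its own RPC socket; -z keeps it
  # idle until a perform_tests RPC arrives, -q 16 and -o 131072 match qd and bs,
  # and -t 2 gives the two-second run reported in the results above.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Stand-in for waitforlisten: block until the UNIX-domain RPC socket exists
  # before any rpc.py calls are sent to it.
  while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done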
00:29:44.067 EAL: No free 2048 kB hugepages reported on node 1
00:29:44.067 [2024-07-16 00:56:01.726723] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:44.067 [2024-07-16 00:56:01.823277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:45.004 00:56:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:56:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:56:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:56:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:45.262 00:56:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:56:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:56:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:45.262 00:56:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:56:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:56:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:45.521 nvme0n1
00:29:45.521 00:56:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:56:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:56:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:45.521 00:56:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:56:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:56:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:45.521 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:45.521 Zero copy mechanism will not be used.
00:29:45.521 Running I/O for 2 seconds...
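The trace above shows the whole setup for this pass before any I/O is issued: bdev_nvme_set_options enables per-controller NVMe error counters and sets the bdev retry count to -1 so injected failures are retried instead of failing the job (the earlier result table reports 0.00 Fail/s even though 111 transient errors were counted), bdev_nvme_attach_controller connects to the target over TCP with data digest enabled (--ddgst), and accel_error_inject_error arms crc32c corruption so the digests on received data stop verifying, which is what produces the stream of data digest errors and TRANSIENT TRANSPORT ERROR completions below. A condensed sketch of that RPC sequence follows, using only commands that appear in this log; the rpc and bperf_sock shell variables are shorthand introduced here, and the jq line mirrors the get_transient_errcount helper shown earlier:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock

  # Count NVMe errors per controller and retry failed I/O at the bdev layer so
  # the workload keeps completing while the errors accumulate in the stats.
  $rpc -s $bperf_sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the target subsystem over TCP with data digest (DDGST) enabled.
  $rpc -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm the crc32c corruption exactly as the trace does; rpc_cmd in the harness
  # sends this to the application's default RPC socket rather than bperf's.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

  # Start the queued randread workload, then read back the transient error count.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s $bperf_sock perform_tests
  $rpc -s $bperf_sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'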
00:29:45.521 [2024-07-16 00:56:03.347378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.521 [2024-07-16 00:56:03.347429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.521 [2024-07-16 00:56:03.347448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:45.521 [2024-07-16 00:56:03.355885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.521 [2024-07-16 00:56:03.355924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.521 [2024-07-16 00:56:03.355940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.365567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.365602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.365618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.374776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.374810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.374826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.383641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.383675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.383690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.393146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.393179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.393194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.403026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.403059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.403075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.413044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.413085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.413101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.422803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.422837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.422853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.432253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.432299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.432313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.441399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.441431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.441446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.450268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.450301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.450316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.459245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.459285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.459301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.468946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.468980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.468995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.478442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.478474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.478489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.487655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.487687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.487701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.496987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.497021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.497036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.506191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.506223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.506238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.514619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.514651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.514667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.523595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.523629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.523644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.532476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.532508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:45.780 [2024-07-16 00:56:03.532523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.541728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.541762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.541778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.550491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.550524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.550540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.559141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.559173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.559189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.567967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.568000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.568020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.576519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.576552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.576567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.585407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.585440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.585455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:45.780 [2024-07-16 00:56:03.594490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.780 [2024-07-16 00:56:03.594523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.780 [2024-07-16 00:56:03.594538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:45.781 [2024-07-16 00:56:03.603079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.781 [2024-07-16 00:56:03.603113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.781 [2024-07-16 00:56:03.603127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:45.781 [2024-07-16 00:56:03.611418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:45.781 [2024-07-16 00:56:03.611452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.781 [2024-07-16 00:56:03.611468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.619541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.619574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.619590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.627983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.628015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.628030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.636032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.636064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.636080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.644313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.644349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.644364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.652585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.652617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.652632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.660787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.660819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.660833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.669186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.669217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.669232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.677447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.677479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.677493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.685770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.685801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.685815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.694022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.694053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.694068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.702153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.702185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.702200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.710179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 
00:29:46.040 [2024-07-16 00:56:03.710211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.710226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.718292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.718325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.718340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.726824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.726856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.726871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.735021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.735053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.735067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.743117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.743149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.743164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.751332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.751363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.751377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.759711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.759742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.759757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.767925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.767960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.767974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.776528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.776560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.776575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.785158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.785189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.785210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.793652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.793683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.793698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.802062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.802094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.802109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.810462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.810494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.810508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.040 [2024-07-16 00:56:03.818629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.040 [2024-07-16 00:56:03.818663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-07-16 00:56:03.818677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.041 [2024-07-16 00:56:03.826990] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.041 [2024-07-16 00:56:03.827023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-07-16 00:56:03.827038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.041 [2024-07-16 00:56:03.835249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.041 [2024-07-16 00:56:03.835289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-07-16 00:56:03.835303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.041 [2024-07-16 00:56:03.843601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.041 [2024-07-16 00:56:03.843632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-07-16 00:56:03.843647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.041 [2024-07-16 00:56:03.852374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.041 [2024-07-16 00:56:03.852406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-07-16 00:56:03.852421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.041 [2024-07-16 00:56:03.860839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.041 [2024-07-16 00:56:03.860872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-07-16 00:56:03.860886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.041 [2024-07-16 00:56:03.869284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.041 [2024-07-16 00:56:03.869316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-07-16 00:56:03.869331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.041 [2024-07-16 00:56:03.877833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.041 [2024-07-16 00:56:03.877866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-07-16 00:56:03.877880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:29:46.300 [2024-07-16 00:56:03.886276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.886309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.886324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:03.894585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.894618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.894633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:03.902973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.903006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.903020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:03.911101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.911133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.911148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:03.919471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.919504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.919518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:03.927736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.927768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.927788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:03.936090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.936121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.936136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:03.944685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.944716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.944730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:03.952827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.952858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.952873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:03.961008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.961038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.961053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:03.969128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.969159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.969173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:03.977232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.977272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.977288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:03.985381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.985412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.985426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:03.993564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:03.993595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:03.993616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:04.001702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:04.001739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:04.001754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.300 [2024-07-16 00:56:04.009844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.300 [2024-07-16 00:56:04.009875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.300 [2024-07-16 00:56:04.009890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.018038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.018069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.018084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.026282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.026314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.026330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.034481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.034512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.034527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.042671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.042702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.042717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.050727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.050759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:46.301 [2024-07-16 00:56:04.050774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.058793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.058824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.058838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.066866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.066897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.066912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.074901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.074934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.074949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.083109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.083141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.083156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.091203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.091234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.091249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.099248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.099290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.099304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.107294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.107325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.107340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.115295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.115327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.115342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.123388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.123420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.123435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.301 [2024-07-16 00:56:04.131316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.301 [2024-07-16 00:56:04.131348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.301 [2024-07-16 00:56:04.131363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.560 [2024-07-16 00:56:04.139289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.560 [2024-07-16 00:56:04.139322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.560 [2024-07-16 00:56:04.139347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.560 [2024-07-16 00:56:04.147337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.560 [2024-07-16 00:56:04.147369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.560 [2024-07-16 00:56:04.147384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.560 [2024-07-16 00:56:04.155333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.560 [2024-07-16 00:56:04.155365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.560 [2024-07-16 00:56:04.155380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.560 [2024-07-16 00:56:04.163335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.560 [2024-07-16 00:56:04.163368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.560 [2024-07-16 00:56:04.163382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.560 [2024-07-16 00:56:04.171274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.560 [2024-07-16 00:56:04.171306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.560 [2024-07-16 00:56:04.171321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.560 [2024-07-16 00:56:04.179090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.560 [2024-07-16 00:56:04.179122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.179136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.187102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.187134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.187148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.195180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.195212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.195227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.203436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.203467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.203483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.211537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.211573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.211587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.219709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 
00:29:46.561 [2024-07-16 00:56:04.219742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.219757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.227858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.227889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.227904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.235917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.235951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.235966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.244080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.244112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.244127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.252185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.252216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.252231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.260285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.260316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.260330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.268309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.268341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.268356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.276329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.276361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.276380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.284563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.284595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.284609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.292775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.292807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.292821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.300916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.300948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.300964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.309034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.309065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.309080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.317291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.317327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.317344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.325581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.325614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.325630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.333673] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.333704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.333719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.341812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.341843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.341859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.350023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.350060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.350075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.358111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.358143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.358158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.366177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.366209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.366224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.374392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.374424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.374438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.382454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.382486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.382501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.390534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.390566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.390581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.561 [2024-07-16 00:56:04.398600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.561 [2024-07-16 00:56:04.398632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.561 [2024-07-16 00:56:04.398646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.406733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.406765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.406780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.414850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.414881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.414896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.422984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.423015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.423031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.431163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.431194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.431209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.439329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.439359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.439373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.447499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.447530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.447545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.455618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.455650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.455664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.463799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.463831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.463846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.471873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.471903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.471918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.480014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.480045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.480060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.488162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.488193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.488212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.496320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.496351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.496365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.504542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.504574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.504588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.512621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.512652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.512667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.520787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.520819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.520834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.528887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.528917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.528931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.537049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.537080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.537095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.545174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.545204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.545219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.553313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.553344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:46.820 [2024-07-16 00:56:04.553359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.561501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.561537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.561551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.569684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.569715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.569730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.577819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.577851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.577865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.586034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.586065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.586080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.594087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.594118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.594133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.602172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.602203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.602217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.610363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.820 [2024-07-16 00:56:04.610393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.820 [2024-07-16 00:56:04.610409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.820 [2024-07-16 00:56:04.618559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.821 [2024-07-16 00:56:04.618591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.821 [2024-07-16 00:56:04.618606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:46.821 [2024-07-16 00:56:04.627002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.821 [2024-07-16 00:56:04.627033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.821 [2024-07-16 00:56:04.627048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:46.821 [2024-07-16 00:56:04.635181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.821 [2024-07-16 00:56:04.635212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.821 [2024-07-16 00:56:04.635227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:46.821 [2024-07-16 00:56:04.643211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.821 [2024-07-16 00:56:04.643243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.821 [2024-07-16 00:56:04.643266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:46.821 [2024-07-16 00:56:04.651927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:46.821 [2024-07-16 00:56:04.651961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.821 [2024-07-16 00:56:04.651976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.078 [2024-07-16 00:56:04.661003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.078 [2024-07-16 00:56:04.661040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.661056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.671023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.671058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.671073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.680890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.680925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.680940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.690496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.690532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.690547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.700912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.700948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.700963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.711938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.711974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.711995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.721950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.721986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.722002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.731933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.731967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.731983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.742291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 
[2024-07-16 00:56:04.742326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.742341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.752697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.752731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.752747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.763063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.763098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.763114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.774398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.774434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.774449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.785378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.785411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.785426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.795283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.795318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.795332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.804937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.804971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.804986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.814957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.814992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.815007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.825545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.825578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.825594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.836108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.836142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.836158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.847003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.847038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.847053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.857726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.857762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.857778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.079 [2024-07-16 00:56:04.868651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.079 [2024-07-16 00:56:04.868687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.079 [2024-07-16 00:56:04.868702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.080 [2024-07-16 00:56:04.878969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.080 [2024-07-16 00:56:04.879005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.080 [2024-07-16 00:56:04.879021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.080 [2024-07-16 00:56:04.889575] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.080 [2024-07-16 00:56:04.889610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.080 [2024-07-16 00:56:04.889631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.080 [2024-07-16 00:56:04.900139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.080 [2024-07-16 00:56:04.900173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.080 [2024-07-16 00:56:04.900189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.080 [2024-07-16 00:56:04.911268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.080 [2024-07-16 00:56:04.911304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.080 [2024-07-16 00:56:04.911320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.338 [2024-07-16 00:56:04.921107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.338 [2024-07-16 00:56:04.921142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.338 [2024-07-16 00:56:04.921158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.338 [2024-07-16 00:56:04.927691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:04.927725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:04.927740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:04.938998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:04.939032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:04.939048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:04.950808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:04.950844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:04.950859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:29:47.339 [2024-07-16 00:56:04.962456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:04.962489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:04.962504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:04.974303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:04.974338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:04.974354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:04.985914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:04.985955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:04.985970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:04.995840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:04.995876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:04.995892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.007105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.007140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.007156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.018634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.018669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.018685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.029074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.029110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.029126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.041318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.041352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.041367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.052979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.053015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.053030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.064182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.064218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.064235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.075501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.075537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.075553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.085698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.085732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.085749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.095651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.095688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.095704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.106050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.106085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.106101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.116301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.116336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.116351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.126347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.126380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.126395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.135398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.135432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.135446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.144138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.144173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.144189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.153131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.153166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.153180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.162288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.162322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.162343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.339 [2024-07-16 00:56:05.171522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.339 [2024-07-16 00:56:05.171555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.339 [2024-07-16 00:56:05.171569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.180244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.180287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.180303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.189004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.189038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.189054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.197827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.197860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.197875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.206320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.206353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.206367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.214058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.214095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.214109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.222017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.222053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.222069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.229930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.229966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 
[2024-07-16 00:56:05.229981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.237788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.237829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.237844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.245849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.245883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.245898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.254029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.254062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.254077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.262113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.262150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.262165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.270161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.270196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.270211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.278436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.278472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.278487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.286413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.286447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.286462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.294808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.294841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.598 [2024-07-16 00:56:05.294856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.598 [2024-07-16 00:56:05.303201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.598 [2024-07-16 00:56:05.303234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.599 [2024-07-16 00:56:05.303249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.599 [2024-07-16 00:56:05.311456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.599 [2024-07-16 00:56:05.311489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.599 [2024-07-16 00:56:05.311503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:47.599 [2024-07-16 00:56:05.319684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.599 [2024-07-16 00:56:05.319718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.599 [2024-07-16 00:56:05.319733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:47.599 [2024-07-16 00:56:05.328013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.599 [2024-07-16 00:56:05.328047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.599 [2024-07-16 00:56:05.328061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:47.599 [2024-07-16 00:56:05.336345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.599 [2024-07-16 00:56:05.336389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.599 [2024-07-16 00:56:05.336406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:47.599 [2024-07-16 00:56:05.344481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6f27c0) 00:29:47.599 [2024-07-16 00:56:05.344515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:47.599 [2024-07-16 00:56:05.344529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:47.599
00:29:47.599 Latency(us)
00:29:47.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:47.599 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:47.599 nvme0n1 : 2.00 3536.45 442.06 0.00 0.00 4518.59 1199.01 12153.95
00:29:47.599 ===================================================================================================================
00:29:47.599 Total : 3536.45 442.06 0.00 0.00 4518.59 1199.01 12153.95
00:29:47.599 0
00:29:47.599 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:47.599 | .driver_specific
00:29:47.599 | .nvme_error
00:29:47.599 | .status_code
00:29:47.599 | .command_transient_transport_error'
00:29:47.599 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:47.857 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 228 > 0 ))
00:29:47.857 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3211912
00:29:47.857 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3211912 ']'
00:29:47.857 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3211912
00:29:47.857 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:47.857 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:47.857 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3211912
00:29:47.857 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:47.857 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:47.857 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3211912'
00:29:47.857 killing process with pid 3211912
00:29:47.857 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3211912
00:29:47.857 Received shutdown signal, test time was about 2.000000 seconds
00:29:47.857
00:29:47.857 Latency(us)
00:29:47.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:47.857 ===================================================================================================================
00:29:47.857 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3211912
00:29:48.115 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:48.115 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:48.115 00:56:05
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:48.115 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:48.115 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:48.115 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3212708 00:29:48.115 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3212708 /var/tmp/bperf.sock 00:29:48.116 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:48.116 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3212708 ']' 00:29:48.116 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:48.116 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:48.116 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:48.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:48.116 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:48.116 00:56:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:48.373 [2024-07-16 00:56:05.960809] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:29:48.373 [2024-07-16 00:56:05.960871] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212708 ] 00:29:48.373 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.373 [2024-07-16 00:56:06.045118] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.373 [2024-07-16 00:56:06.140647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.307 00:56:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:49.307 00:56:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:49.307 00:56:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:49.307 00:56:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:49.565 00:56:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:49.565 00:56:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.565 00:56:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:49.565 00:56:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.565 00:56:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:49.565 00:56:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:49.824 nvme0n1 00:29:49.824 00:56:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:49.824 00:56:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.824 00:56:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:49.824 00:56:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.824 00:56:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:49.824 00:56:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:50.084 Running I/O for 2 seconds... 00:29:50.084 [2024-07-16 00:56:07.701524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.084 [2024-07-16 00:56:07.701783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.084 [2024-07-16 00:56:07.701825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.084 [2024-07-16 00:56:07.716006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.084 [2024-07-16 00:56:07.716260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.084 [2024-07-16 00:56:07.716295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.084 [2024-07-16 00:56:07.730562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.084 [2024-07-16 00:56:07.730810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.084 [2024-07-16 00:56:07.730840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.084 [2024-07-16 00:56:07.745227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.084 [2024-07-16 00:56:07.745490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.084 [2024-07-16 00:56:07.745521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.084 [2024-07-16 00:56:07.759721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.084 [2024-07-16 00:56:07.759968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:50.084 [2024-07-16 00:56:07.759998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.084 [2024-07-16 00:56:07.774249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.084 [2024-07-16 00:56:07.774506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.084 [2024-07-16 00:56:07.774537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.084 [2024-07-16 00:56:07.788677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.084 [2024-07-16 00:56:07.788925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.084 [2024-07-16 00:56:07.788954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.084 [2024-07-16 00:56:07.803162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.084 [2024-07-16 00:56:07.803416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.084 [2024-07-16 00:56:07.803446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.084 [2024-07-16 00:56:07.817635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.084 [2024-07-16 00:56:07.817876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.084 [2024-07-16 00:56:07.817905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.084 [2024-07-16 00:56:07.832101] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.084 [2024-07-16 00:56:07.832347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.084 [2024-07-16 00:56:07.832378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.084 [2024-07-16 00:56:07.846554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.084 [2024-07-16 00:56:07.846798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.084 [2024-07-16 00:56:07.846828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.085 [2024-07-16 00:56:07.861057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.085 [2024-07-16 00:56:07.861299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15745 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:50.085 [2024-07-16 00:56:07.861330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.085 [2024-07-16 00:56:07.875513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.085 [2024-07-16 00:56:07.875755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.085 [2024-07-16 00:56:07.875784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.085 [2024-07-16 00:56:07.890000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.085 [2024-07-16 00:56:07.890240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.085 [2024-07-16 00:56:07.890280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.085 [2024-07-16 00:56:07.904437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.085 [2024-07-16 00:56:07.904680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.085 [2024-07-16 00:56:07.904709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.085 [2024-07-16 00:56:07.918913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.085 [2024-07-16 00:56:07.919152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.085 [2024-07-16 00:56:07.919182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:07.933411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:07.933656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:07.933686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:07.947865] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:07.948112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:07.948142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:07.962368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:07.962609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15593 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:07.962639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:07.976829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:07.977073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:07.977103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:07.991285] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:07.991529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:07.991560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:08.005744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:08.005985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:08.006014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:08.020193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:08.020449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:08.020479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:08.034652] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:08.034897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:08.034926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:08.049116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:08.049366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:08.049396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:08.063578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:08.063818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13279 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:08.063846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:08.078033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:08.078280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:08.078317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:08.092487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:08.092729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:08.092760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:08.106907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:08.107150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:08.107180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:08.121389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:08.121629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.345 [2024-07-16 00:56:08.121659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.345 [2024-07-16 00:56:08.135859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.345 [2024-07-16 00:56:08.136104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.346 [2024-07-16 00:56:08.136133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.346 [2024-07-16 00:56:08.150309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.346 [2024-07-16 00:56:08.150553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.346 [2024-07-16 00:56:08.150584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.346 [2024-07-16 00:56:08.164803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.346 [2024-07-16 00:56:08.165047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6622 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:50.346 [2024-07-16 00:56:08.165077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.346 [2024-07-16 00:56:08.179368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.346 [2024-07-16 00:56:08.179614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.346 [2024-07-16 00:56:08.179644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.193869] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.194115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.194148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.208326] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.208568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.208599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.223021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.223273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.223302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.237507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.237749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.237779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.251937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.252179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.252208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.266432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.266675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15407 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.266706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.280907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.281149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.281179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.295358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.295597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.295626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.309825] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.310065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.310094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.324292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.324533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.324562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.338717] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.338961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.338990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.353239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.353492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.353522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.367697] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.367937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1674 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.367965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.382145] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.382392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.382421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.396602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.396847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.396886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.411063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.411305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.411335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.425540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.425781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.425811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.605 [2024-07-16 00:56:08.439956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.605 [2024-07-16 00:56:08.440200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.605 [2024-07-16 00:56:08.440229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.454479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.454720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.454750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.468933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.469175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18863 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.469205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.483406] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.483647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.483677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.497819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.498061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.498090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.512321] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.512564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.512592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.526730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.526977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.527006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.541228] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.541478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.541509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.555688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.555927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.555956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.570129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.570380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:19321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.570410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.584612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.584853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.584883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.599050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.599296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.599328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.613509] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.613750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.613781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.628009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.628262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.628291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.642670] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.642912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.642941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.657233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.657487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.657516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.671712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.671957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:13110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.671987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.686140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.686390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.686419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:50.864 [2024-07-16 00:56:08.700607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:50.864 [2024-07-16 00:56:08.700853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:50.864 [2024-07-16 00:56:08.700883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.144 [2024-07-16 00:56:08.715099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.144 [2024-07-16 00:56:08.715353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.144 [2024-07-16 00:56:08.715382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.144 [2024-07-16 00:56:08.729535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.144 [2024-07-16 00:56:08.729777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.144 [2024-07-16 00:56:08.729807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.144 [2024-07-16 00:56:08.744000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.144 [2024-07-16 00:56:08.744240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.144 [2024-07-16 00:56:08.744275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.144 [2024-07-16 00:56:08.758441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.144 [2024-07-16 00:56:08.758682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.144 [2024-07-16 00:56:08.758711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.144 [2024-07-16 00:56:08.772884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.144 [2024-07-16 00:56:08.773125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:8 nsid:1 lba:12212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.144 [2024-07-16 00:56:08.773160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.144 [2024-07-16 00:56:08.787342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.144 [2024-07-16 00:56:08.787587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.144 [2024-07-16 00:56:08.787616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.144 [2024-07-16 00:56:08.801753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.144 [2024-07-16 00:56:08.801996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.144 [2024-07-16 00:56:08.802025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.144 [2024-07-16 00:56:08.816204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.144 [2024-07-16 00:56:08.816466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.144 [2024-07-16 00:56:08.816497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.144 [2024-07-16 00:56:08.830640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.144 [2024-07-16 00:56:08.830884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.144 [2024-07-16 00:56:08.830913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.144 [2024-07-16 00:56:08.845073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.144 [2024-07-16 00:56:08.845322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.144 [2024-07-16 00:56:08.845351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.144 [2024-07-16 00:56:08.859546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.144 [2024-07-16 00:56:08.859788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.144 [2024-07-16 00:56:08.859818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.144 [2024-07-16 00:56:08.873972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.144 [2024-07-16 00:56:08.874211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:1991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.145 [2024-07-16 00:56:08.874241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.145 [2024-07-16 00:56:08.888399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.145 [2024-07-16 00:56:08.888639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.145 [2024-07-16 00:56:08.888667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.145 [2024-07-16 00:56:08.902855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.145 [2024-07-16 00:56:08.903100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.145 [2024-07-16 00:56:08.903128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.145 [2024-07-16 00:56:08.917288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.145 [2024-07-16 00:56:08.917532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.145 [2024-07-16 00:56:08.917561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.145 [2024-07-16 00:56:08.931753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.145 [2024-07-16 00:56:08.931995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.145 [2024-07-16 00:56:08.932024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.145 [2024-07-16 00:56:08.946168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.145 [2024-07-16 00:56:08.946417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.145 [2024-07-16 00:56:08.946448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.145 [2024-07-16 00:56:08.960596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.145 [2024-07-16 00:56:08.960838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.145 [2024-07-16 00:56:08.960867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.457 [2024-07-16 00:56:08.975083] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.457 [2024-07-16 00:56:08.975333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:4314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:08.975363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:08.989521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:08.989763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:08.989793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.003991] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.004236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.004274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.018435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.018678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.018709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.032863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.033110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.033141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.047339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.047579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.047609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.061740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.061982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.062011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.076197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.076448] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.076478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.090632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.090875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.090904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.105099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.105347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.105375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.119515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.119756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.119785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.134008] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.134252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.134289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.148401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.148644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.148672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.162915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.163159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.163188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.177322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.177567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.177597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.191934] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.192178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.192208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.206399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.206644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.206673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.221079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.221328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.221358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.235540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.235786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.235816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.249997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.250243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.250279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.264463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.264704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.264735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.278918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.279161] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.279196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.458 [2024-07-16 00:56:09.293365] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.458 [2024-07-16 00:56:09.293609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.458 [2024-07-16 00:56:09.293641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.723 [2024-07-16 00:56:09.307811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.723 [2024-07-16 00:56:09.308058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.723 [2024-07-16 00:56:09.308087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.723 [2024-07-16 00:56:09.322285] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.723 [2024-07-16 00:56:09.322530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.723 [2024-07-16 00:56:09.322565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.723 [2024-07-16 00:56:09.336742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.723 [2024-07-16 00:56:09.336983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.723 [2024-07-16 00:56:09.337013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.723 [2024-07-16 00:56:09.351165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.723 [2024-07-16 00:56:09.351417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.351447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.365640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 [2024-07-16 00:56:09.365882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.365911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.380066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 [2024-07-16 
00:56:09.380309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.380339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.394544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 [2024-07-16 00:56:09.394786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.394816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.408983] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 [2024-07-16 00:56:09.409230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.409265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.423438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 [2024-07-16 00:56:09.423683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.423713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.437880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 [2024-07-16 00:56:09.438120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.438151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.452342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 [2024-07-16 00:56:09.452586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.452615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.466767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 [2024-07-16 00:56:09.467007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.467037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.481227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 
[2024-07-16 00:56:09.481477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.481506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.495627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 [2024-07-16 00:56:09.495870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.495900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.510093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 [2024-07-16 00:56:09.510340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.510369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.524529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 [2024-07-16 00:56:09.524771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.524799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.538981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 [2024-07-16 00:56:09.539224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.539253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.724 [2024-07-16 00:56:09.553471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.724 [2024-07-16 00:56:09.553715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.724 [2024-07-16 00:56:09.553744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.989 [2024-07-16 00:56:09.567912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.989 [2024-07-16 00:56:09.568152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.989 [2024-07-16 00:56:09.568182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.989 [2024-07-16 00:56:09.582357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 
00:29:51.989 [2024-07-16 00:56:09.582599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.989 [2024-07-16 00:56:09.582628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.989 [2024-07-16 00:56:09.596841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.989 [2024-07-16 00:56:09.597082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.989 [2024-07-16 00:56:09.597111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.989 [2024-07-16 00:56:09.611264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.989 [2024-07-16 00:56:09.611508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.989 [2024-07-16 00:56:09.611538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.989 [2024-07-16 00:56:09.625730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.989 [2024-07-16 00:56:09.625973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.989 [2024-07-16 00:56:09.626002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.989 [2024-07-16 00:56:09.640172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.989 [2024-07-16 00:56:09.640427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.989 [2024-07-16 00:56:09.640457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.989 [2024-07-16 00:56:09.654627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.989 [2024-07-16 00:56:09.654871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.989 [2024-07-16 00:56:09.654900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.989 [2024-07-16 00:56:09.669075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with pdu=0x2000190fda78 00:29:51.989 [2024-07-16 00:56:09.669328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.989 [2024-07-16 00:56:09.669358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.989 [2024-07-16 00:56:09.683527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc00f60) with 
pdu=0x2000190fda78 00:29:51.989 [2024-07-16 00:56:09.683770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:51.989 [2024-07-16 00:56:09.683800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:51.989 00:29:51.989 Latency(us) 00:29:51.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.989 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:51.989 nvme0n1 : 2.01 17594.97 68.73 0.00 0.00 7255.78 6613.18 15966.95 00:29:51.989 =================================================================================================================== 00:29:51.989 Total : 17594.97 68.73 0.00 0.00 7255.78 6613.18 15966.95 00:29:51.989 0 00:29:51.989 00:56:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:51.989 00:56:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:51.990 00:56:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:51.990 | .driver_specific 00:29:51.990 | .nvme_error 00:29:51.990 | .status_code 00:29:51.990 | .command_transient_transport_error' 00:29:51.990 00:56:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:52.248 00:56:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 )) 00:29:52.248 00:56:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3212708 00:29:52.248 00:56:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3212708 ']' 00:29:52.248 00:56:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3212708 00:29:52.248 00:56:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:52.248 00:56:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:52.248 00:56:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3212708 00:29:52.248 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:52.248 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:52.248 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3212708' 00:29:52.248 killing process with pid 3212708 00:29:52.248 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3212708 00:29:52.248 Received shutdown signal, test time was about 2.000000 seconds 00:29:52.248 00:29:52.248 Latency(us) 00:29:52.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.248 =================================================================================================================== 00:29:52.248 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:52.248 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3212708 00:29:52.507 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:52.507 00:56:10 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:52.507 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:52.507 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:52.507 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:52.507 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3213510 00:29:52.507 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3213510 /var/tmp/bperf.sock 00:29:52.507 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:52.507 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3213510 ']' 00:29:52.507 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:52.507 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.507 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:52.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:52.507 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.507 00:56:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:52.507 [2024-07-16 00:56:10.306776] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:29:52.507 [2024-07-16 00:56:10.306837] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3213510 ] 00:29:52.507 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:52.507 Zero copy mechanism will not be used. 
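The trace above closes out the previous randwrite run and starts the next one: get_transient_errcount pulls bdev_get_iostat over the bperf RPC socket and extracts the transient-transport-error counter with jq, the (( 138 > 0 )) check confirms the injected digest errors were actually counted, and run_bperf_err then relaunches bdevperf with a 128 KiB block size and queue depth 16. A condensed, standalone sketch of that counter check, paraphrased from the traced helpers in host/digest.sh (the rpc.py path and /var/tmp/bperf.sock socket are taken from the trace; treating a single recorded error as the pass condition mirrors the '> 0' test):

  #!/usr/bin/env bash
  # Sketch of get_transient_errcount: ask bdevperf's RPC socket for iostat
  # and pull out the NVMe transient transport error counter.
  get_transient_errcount() {
      local bdev=$1
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
  }

  # The subtest passes only if at least one injected digest error was recorded.
  (( $(get_transient_errcount nvme0n1) > 0 ))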
00:29:52.507 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.766 [2024-07-16 00:56:10.389530] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.766 [2024-07-16 00:56:10.494407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.701 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:53.701 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:53.701 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:53.701 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:53.701 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:53.701 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.701 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:53.701 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.701 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:53.701 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:54.272 nvme0n1 00:29:54.272 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:54.272 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.272 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:54.272 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.272 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:54.272 00:56:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:54.272 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:54.272 Zero copy mechanism will not be used. 00:29:54.272 Running I/O for 2 seconds... 
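What follows is the digest-error workload itself, and the trace above shows how it is wired up: NVMe error statistics are enabled with retries disabled on the bdevperf side, the crc32c accel operation is reset and later told to corrupt its result on every 32nd operation, the controller is attached with --ddgst so a data digest is carried on the NVMe/TCP connection, and perform_tests drives I/O while the corrupted digests are rejected. A condensed sketch of that sequence, with every flag copied from the trace (the rpc_cmd socket for the nvmf target is assumed to be the default one, since the trace does not show it):

  spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bperf_rpc() { "$spdk_dir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  rpc_cmd()   { "$spdk_dir/scripts/rpc.py" "$@"; }   # assumed: default target RPC socket

  # bdevperf side: count NVMe error completions and never retry, so every
  # digest failure remains visible to bdev_get_iostat afterwards.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Start from a clean accel state (no crc32c error injection).
  rpc_cmd accel_error_inject_error -o crc32c -t disable

  # Attach the subsystem with data digest (--ddgst) enabled over NVMe/TCP.
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt the result of every 32nd crc32c operation; the mismatching data
  # digest is what tcp.c reports below as "Data digest error", and the host
  # records the COMMAND TRANSIENT TRANSPORT ERROR completions.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

  # Drive the configured randwrite workload for 2 seconds.
  "$spdk_dir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests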
00:29:54.272 [2024-07-16 00:56:11.975137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.272 [2024-07-16 00:56:11.975710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.272 [2024-07-16 00:56:11.975752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.272 [2024-07-16 00:56:11.983932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.272 [2024-07-16 00:56:11.984467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.272 [2024-07-16 00:56:11.984503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.272 [2024-07-16 00:56:11.992711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.272 [2024-07-16 00:56:11.993273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.272 [2024-07-16 00:56:11.993307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.272 [2024-07-16 00:56:12.000282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.272 [2024-07-16 00:56:12.000794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.272 [2024-07-16 00:56:12.000827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.272 [2024-07-16 00:56:12.007574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.272 [2024-07-16 00:56:12.008100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.272 [2024-07-16 00:56:12.008133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.272 [2024-07-16 00:56:12.014914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.272 [2024-07-16 00:56:12.015434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.272 [2024-07-16 00:56:12.015466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.272 [2024-07-16 00:56:12.023542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.272 [2024-07-16 00:56:12.024064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.272 [2024-07-16 00:56:12.024095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.272 [2024-07-16 00:56:12.031485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.272 [2024-07-16 00:56:12.032023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.272 [2024-07-16 00:56:12.032055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.272 [2024-07-16 00:56:12.039170] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.272 [2024-07-16 00:56:12.039711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.272 [2024-07-16 00:56:12.039742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.272 [2024-07-16 00:56:12.046568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.272 [2024-07-16 00:56:12.047109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.273 [2024-07-16 00:56:12.047140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.273 [2024-07-16 00:56:12.053540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.273 [2024-07-16 00:56:12.054066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.273 [2024-07-16 00:56:12.054098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.273 [2024-07-16 00:56:12.060279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.273 [2024-07-16 00:56:12.060807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.273 [2024-07-16 00:56:12.060838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.273 [2024-07-16 00:56:12.066869] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.273 [2024-07-16 00:56:12.067399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.273 [2024-07-16 00:56:12.067430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.273 [2024-07-16 00:56:12.073387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.273 [2024-07-16 00:56:12.073905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.273 [2024-07-16 00:56:12.073936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.273 [2024-07-16 00:56:12.080551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.273 [2024-07-16 00:56:12.081074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.273 [2024-07-16 00:56:12.081106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.273 [2024-07-16 00:56:12.087022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.273 [2024-07-16 00:56:12.087567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.273 [2024-07-16 00:56:12.087598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.273 [2024-07-16 00:56:12.094112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.273 [2024-07-16 00:56:12.094635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.273 [2024-07-16 00:56:12.094666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.273 [2024-07-16 00:56:12.101119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.273 [2024-07-16 00:56:12.101639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.273 [2024-07-16 00:56:12.101671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.273 [2024-07-16 00:56:12.107937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.273 [2024-07-16 00:56:12.108463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.273 [2024-07-16 00:56:12.108494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.114935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.115472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.115503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.123182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.123746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.123777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.132238] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.132767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.132797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.141146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.141674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.141708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.150654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.151184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.151216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.160529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.161054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.161090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.170482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.171035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.171067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.180353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.180899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.180929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.190862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.191394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 
[2024-07-16 00:56:12.191424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.201078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.201651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.201683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.211086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.211783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.211814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.220000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.220526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.220557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.230026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.230571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.230601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.239511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.240038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.240068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.249226] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.249792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.249823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.258400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.258920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.258950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.266766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.267291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.267322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.276548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.277066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.277098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.284967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.285500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.285531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.293187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.293721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.535 [2024-07-16 00:56:12.293752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.535 [2024-07-16 00:56:12.301384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.535 [2024-07-16 00:56:12.301943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.536 [2024-07-16 00:56:12.301975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.536 [2024-07-16 00:56:12.310275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.536 [2024-07-16 00:56:12.310836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.536 [2024-07-16 00:56:12.310868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.536 [2024-07-16 00:56:12.318492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.536 [2024-07-16 00:56:12.319017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.536 [2024-07-16 00:56:12.319048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.536 [2024-07-16 00:56:12.326181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.536 [2024-07-16 00:56:12.326703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.536 [2024-07-16 00:56:12.326734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.536 [2024-07-16 00:56:12.334011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.536 [2024-07-16 00:56:12.334534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.536 [2024-07-16 00:56:12.334565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.536 [2024-07-16 00:56:12.342299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.536 [2024-07-16 00:56:12.342785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.536 [2024-07-16 00:56:12.342815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.536 [2024-07-16 00:56:12.349982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.536 [2024-07-16 00:56:12.350498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.536 [2024-07-16 00:56:12.350530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.536 [2024-07-16 00:56:12.357135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.536 [2024-07-16 00:56:12.357651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.536 [2024-07-16 00:56:12.357684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.536 [2024-07-16 00:56:12.363950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.536 [2024-07-16 00:56:12.364484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.536 [2024-07-16 00:56:12.364514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.536 [2024-07-16 00:56:12.370752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.536 [2024-07-16 00:56:12.371265] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.536 [2024-07-16 00:56:12.371296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.377395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.377916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.377947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.383833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.384351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.384388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.390499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.391014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.391044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.396978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.397524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.397555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.403534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.404059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.404090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.409967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.410504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.410536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.416440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.416967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.416998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.422832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.423362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.423393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.429286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.429806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.429837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.435834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.436350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.436382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.442283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.442817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.442847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.448686] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.449216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.449246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.455298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.455831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.455862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.462634] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 
[2024-07-16 00:56:12.463158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.463189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.470004] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.470522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.470552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.476674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.477180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.477211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.483370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.483899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.483929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.490041] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.490573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.490604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.496571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.497095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.497126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.502938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.503464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.503496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.509316] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) 
with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.509838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.509869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.515650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.516163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.516194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.522087] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.522630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.522660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.528892] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.529426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.797 [2024-07-16 00:56:12.529457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.797 [2024-07-16 00:56:12.535242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.797 [2024-07-16 00:56:12.535778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.798 [2024-07-16 00:56:12.535809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.798 [2024-07-16 00:56:12.541625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.798 [2024-07-16 00:56:12.542150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.798 [2024-07-16 00:56:12.542181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.798 [2024-07-16 00:56:12.547939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.798 [2024-07-16 00:56:12.548459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.798 [2024-07-16 00:56:12.548490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.798 [2024-07-16 00:56:12.554281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.798 [2024-07-16 00:56:12.554797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.798 [2024-07-16 00:56:12.554833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.798 [2024-07-16 00:56:12.560612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.798 [2024-07-16 00:56:12.561131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.798 [2024-07-16 00:56:12.561163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.798 [2024-07-16 00:56:12.567132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.798 [2024-07-16 00:56:12.567646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.798 [2024-07-16 00:56:12.567677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.798 [2024-07-16 00:56:12.574549] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.798 [2024-07-16 00:56:12.575057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.798 [2024-07-16 00:56:12.575087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.798 [2024-07-16 00:56:12.583421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.798 [2024-07-16 00:56:12.583991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.798 [2024-07-16 00:56:12.584021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.798 [2024-07-16 00:56:12.592738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.798 [2024-07-16 00:56:12.593277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.798 [2024-07-16 00:56:12.593307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:54.798 [2024-07-16 00:56:12.602163] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.798 [2024-07-16 00:56:12.602719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.798 [2024-07-16 00:56:12.602750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:54.798 [2024-07-16 00:56:12.611183] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.798 [2024-07-16 00:56:12.611699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.798 [2024-07-16 00:56:12.611730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.798 [2024-07-16 00:56:12.619891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.798 [2024-07-16 00:56:12.620432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.798 [2024-07-16 00:56:12.620463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:54.798 [2024-07-16 00:56:12.628538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:54.798 [2024-07-16 00:56:12.629074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.798 [2024-07-16 00:56:12.629105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.059 [2024-07-16 00:56:12.637079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.637199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.637228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.645863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.646478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.646509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.654018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.654598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.654628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.662594] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.663117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.663147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:29:55.060 [2024-07-16 00:56:12.671010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.671507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.671538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.678489] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.678989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.679019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.687402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.687925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.687954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.695640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.696104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.696133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.702767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.703241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.703280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.710220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.710700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.710730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.717512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.717966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.717997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.725158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.725637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.725668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.731827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.732306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.732336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.738358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.738828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.738859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.744541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.745012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.745043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.750803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.751275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.751307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.756859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.757332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.757369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.762941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.763403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.763436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.768940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.769392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.769424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.775039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.775514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.775545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.781786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.782328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.782359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.789244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.789767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.789798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.797595] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.798168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.798198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.806615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.807151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.807181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.815999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.816609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.816640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.825606] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.826143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.826175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.835542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.836096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.836127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.845345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.845917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.845947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.060 [2024-07-16 00:56:12.854692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.060 [2024-07-16 00:56:12.855219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.060 [2024-07-16 00:56:12.855249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.061 [2024-07-16 00:56:12.864696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.061 [2024-07-16 00:56:12.865215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.061 [2024-07-16 00:56:12.865245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.061 [2024-07-16 00:56:12.874158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.061 [2024-07-16 00:56:12.874712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.061 [2024-07-16 00:56:12.874742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.061 [2024-07-16 00:56:12.883017] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.061 [2024-07-16 00:56:12.883476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.061 
[2024-07-16 00:56:12.883507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.061 [2024-07-16 00:56:12.891399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.061 [2024-07-16 00:56:12.891869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.061 [2024-07-16 00:56:12.891899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.899533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.900015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.319 [2024-07-16 00:56:12.900050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.907028] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.907500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.319 [2024-07-16 00:56:12.907530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.914779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.915225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.319 [2024-07-16 00:56:12.915264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.922570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.923021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.319 [2024-07-16 00:56:12.923051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.930398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.930865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.319 [2024-07-16 00:56:12.930895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.937318] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.937789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:55.319 [2024-07-16 00:56:12.937820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.943850] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.944312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.319 [2024-07-16 00:56:12.944343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.950680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.951131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.319 [2024-07-16 00:56:12.951162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.956814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.957252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.319 [2024-07-16 00:56:12.957291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.963101] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.963594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.319 [2024-07-16 00:56:12.963625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.969466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.969944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.319 [2024-07-16 00:56:12.969975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.977432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.977903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.319 [2024-07-16 00:56:12.977934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.984369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.984831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.319 [2024-07-16 00:56:12.984862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.319 [2024-07-16 00:56:12.990668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.319 [2024-07-16 00:56:12.991122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:12.991153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:12.996912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:12.997388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:12.997420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.003164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.003636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.003667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.010307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.010775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.010805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.017410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.017881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.017912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.023861] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.024326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.024357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.030006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.030478] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.030508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.036150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.036608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.036640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.042295] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.042740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.042769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.049556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.050006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.050036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.056327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.056798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.056828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.063491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.063960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.063991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.071734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.072195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.072232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.078078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.078534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.078572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.084334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.084799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.084830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.090556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.091021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.091052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.097086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.097556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.097587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.103200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.103658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.103688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.109244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.109704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.109734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.115307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.115758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.115789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.121354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 
[2024-07-16 00:56:13.121821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.121852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.127402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.127866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.127897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.133460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.133926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.133957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.139551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.140018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.140048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.145668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.146127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.146157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.151858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.320 [2024-07-16 00:56:13.152315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.320 [2024-07-16 00:56:13.152346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.320 [2024-07-16 00:56:13.157951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.578 [2024-07-16 00:56:13.158418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.578 [2024-07-16 00:56:13.158449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.578 [2024-07-16 00:56:13.163968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with 
pdu=0x2000190fef90 00:29:55.578 [2024-07-16 00:56:13.164436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.578 [2024-07-16 00:56:13.164466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.578 [2024-07-16 00:56:13.170017] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.578 [2024-07-16 00:56:13.170487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.578 [2024-07-16 00:56:13.170518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.578 [2024-07-16 00:56:13.176042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.578 [2024-07-16 00:56:13.176541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.578 [2024-07-16 00:56:13.176573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.578 [2024-07-16 00:56:13.182230] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.578 [2024-07-16 00:56:13.182666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.578 [2024-07-16 00:56:13.182697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.578 [2024-07-16 00:56:13.188618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.578 [2024-07-16 00:56:13.189058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.578 [2024-07-16 00:56:13.189088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.578 [2024-07-16 00:56:13.195597] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.578 [2024-07-16 00:56:13.196033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.578 [2024-07-16 00:56:13.196064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.578 [2024-07-16 00:56:13.202662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.203217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.203247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.211229] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.211940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.211971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.219735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.220222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.220261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.228220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.228763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.228793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.237132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.237662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.237691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.245612] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.246060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.246091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.254046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.254605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.254641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.262557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.262982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.263012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.270679] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.271242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.271283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.279147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.279639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.279669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.288725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.289158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.289190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.297287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.297718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.297748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.304894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.305359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.305389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.312410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.312842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.312872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.319973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.320425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.320456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:55.579 [2024-07-16 00:56:13.327729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.328235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.328276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.335566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.336049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.336079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.343890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.344391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.344422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.352409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.352914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.352945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.360866] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.361293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.361323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.369569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.370093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.370124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.378107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.378608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.378638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.386040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.386458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.386490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.392753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.393168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.393199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.398812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.399211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.399242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.404146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.404520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.404551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.409269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.409629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.409660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.579 [2024-07-16 00:56:13.414424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.579 [2024-07-16 00:56:13.414784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.579 [2024-07-16 00:56:13.414814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.419685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.420053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.420082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.426176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.426552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.426582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.431491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.431858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.431888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.436771] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.437138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.437168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.441918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.442286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.442323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.447135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.447502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.447533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.452281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.452640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.452669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.457466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.457839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.457870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.462579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.462934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.462964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.467760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.468113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.468143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.472860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.473194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.473225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.477995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.478350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.478380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.483127] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.483478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.483509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.488274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.488630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.488660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.493410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.493752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 
[2024-07-16 00:56:13.493783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.498549] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.498898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.498928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.503654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.503994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.504024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.508852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.509199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.509229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.514173] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.514515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.514546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.519328] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.519673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.519703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.524489] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.524826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.524857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.529683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.530032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.530062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.535753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.536210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.536239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.542874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.543223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.543262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.548403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.548759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.838 [2024-07-16 00:56:13.548789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.838 [2024-07-16 00:56:13.553647] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.838 [2024-07-16 00:56:13.553980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.554011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.558930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.559289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.559320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.564239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.564592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.564624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.571105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.571544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.571575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.577729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.578104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.578134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.583884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.584244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.584288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.589778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.590130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.590161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.596216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.596583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.596616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.601644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.601992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.602024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.606911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.607246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.607284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.612108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.612469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.612500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.617360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.617702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.617733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.622535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.622876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.622907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.628069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.628396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.628427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.634535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.634873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.634907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.640056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.640414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.640444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.645358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.645717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.645747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.650608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 
[2024-07-16 00:56:13.650955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.650985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.655936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.656304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.656334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.661180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.661541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.661571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.666370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.666734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.666764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:55.839 [2024-07-16 00:56:13.671538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:55.839 [2024-07-16 00:56:13.671907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.839 [2024-07-16 00:56:13.671937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.098 [2024-07-16 00:56:13.676775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.098 [2024-07-16 00:56:13.677133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.098 [2024-07-16 00:56:13.677164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.098 [2024-07-16 00:56:13.682234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.098 [2024-07-16 00:56:13.682601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.098 [2024-07-16 00:56:13.682631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.098 [2024-07-16 00:56:13.687826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with 
pdu=0x2000190fef90 00:29:56.098 [2024-07-16 00:56:13.688183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.098 [2024-07-16 00:56:13.688214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.098 [2024-07-16 00:56:13.693058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.098 [2024-07-16 00:56:13.693396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.098 [2024-07-16 00:56:13.693427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.098 [2024-07-16 00:56:13.698334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.098 [2024-07-16 00:56:13.698706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.098 [2024-07-16 00:56:13.698740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.098 [2024-07-16 00:56:13.703592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.098 [2024-07-16 00:56:13.703938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.098 [2024-07-16 00:56:13.703968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.098 [2024-07-16 00:56:13.708961] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.098 [2024-07-16 00:56:13.709323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.098 [2024-07-16 00:56:13.709354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.098 [2024-07-16 00:56:13.714133] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.714509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.714541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.719340] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.719689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.719719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.724689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.725040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.725074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.730042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.730388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.730419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.735528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.735885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.735915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.740769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.741128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.741158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.746116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.746482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.746513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.751760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.752119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.752150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.757199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.757559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.757589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.762670] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.763018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.763048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.767878] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.768206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.768237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.773230] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.773588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.773618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.778944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.779268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.779299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.784878] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.785200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.785230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.790130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.790462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.790491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.795408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.795726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.795757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:29:56.099 [2024-07-16 00:56:13.801039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.801346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.801378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.806423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.806766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.806798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.812675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.813096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.813127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.818494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.818820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.818850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.824888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.825231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.825269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.830777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.831106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.831137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.836177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.836496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.836526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.841393] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.841703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.841733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.846607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.846925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.846955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.852576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.852891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.852922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.858829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.859135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.859166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.864384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.864693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.864723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.869701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.870026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.870062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.099 [2024-07-16 00:56:13.874998] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.099 [2024-07-16 00:56:13.875359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.099 [2024-07-16 00:56:13.875390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.100 [2024-07-16 00:56:13.880238] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.100 [2024-07-16 00:56:13.880553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.100 [2024-07-16 00:56:13.880583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.100 [2024-07-16 00:56:13.885530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.100 [2024-07-16 00:56:13.885850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.100 [2024-07-16 00:56:13.885881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.100 [2024-07-16 00:56:13.890794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.100 [2024-07-16 00:56:13.891120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.100 [2024-07-16 00:56:13.891149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.100 [2024-07-16 00:56:13.895979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.100 [2024-07-16 00:56:13.896302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.100 [2024-07-16 00:56:13.896333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.100 [2024-07-16 00:56:13.901235] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.100 [2024-07-16 00:56:13.901580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.100 [2024-07-16 00:56:13.901610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.100 [2024-07-16 00:56:13.906445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.100 [2024-07-16 00:56:13.906738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.100 [2024-07-16 00:56:13.906768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.100 [2024-07-16 00:56:13.911623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.100 [2024-07-16 00:56:13.911931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.100 [2024-07-16 00:56:13.911962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.100 [2024-07-16 00:56:13.916810] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.100 [2024-07-16 00:56:13.917132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.100 [2024-07-16 00:56:13.917162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.100 [2024-07-16 00:56:13.922027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.100 [2024-07-16 00:56:13.922349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.100 [2024-07-16 00:56:13.922380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.100 [2024-07-16 00:56:13.927445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.100 [2024-07-16 00:56:13.927796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.100 [2024-07-16 00:56:13.927826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.100 [2024-07-16 00:56:13.933107] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.100 [2024-07-16 00:56:13.933442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.100 [2024-07-16 00:56:13.933472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.358 [2024-07-16 00:56:13.938685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.358 [2024-07-16 00:56:13.939009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.358 [2024-07-16 00:56:13.939040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.358 [2024-07-16 00:56:13.943922] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.358 [2024-07-16 00:56:13.944241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.358 [2024-07-16 00:56:13.944280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:56.358 [2024-07-16 00:56:13.949167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.358 [2024-07-16 00:56:13.949487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.358 
[2024-07-16 00:56:13.949517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:56.358 [2024-07-16 00:56:13.954380] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.358 [2024-07-16 00:56:13.954717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.358 [2024-07-16 00:56:13.954748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.358 [2024-07-16 00:56:13.959637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc01130) with pdu=0x2000190fef90 00:29:56.358 [2024-07-16 00:56:13.959975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.358 [2024-07-16 00:56:13.960010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:56.358 00:29:56.358 Latency(us) 00:29:56.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.358 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:56.358 nvme0n1 : 2.00 4547.53 568.44 0.00 0.00 3510.10 2383.13 14537.08 00:29:56.358 =================================================================================================================== 00:29:56.358 Total : 4547.53 568.44 0.00 0.00 3510.10 2383.13 14537.08 00:29:56.358 0 00:29:56.358 00:56:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:56.358 00:56:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:56.358 00:56:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:56.358 | .driver_specific 00:29:56.358 | .nvme_error 00:29:56.358 | .status_code 00:29:56.358 | .command_transient_transport_error' 00:29:56.358 00:56:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:56.617 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 293 > 0 )) 00:29:56.617 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3213510 00:29:56.617 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3213510 ']' 00:29:56.617 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3213510 00:29:56.617 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:56.617 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:56.617 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3213510 00:29:56.617 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:56.617 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:56.617 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process 
with pid 3213510' 00:29:56.617 killing process with pid 3213510 00:29:56.617 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3213510 00:29:56.617 Received shutdown signal, test time was about 2.000000 seconds 00:29:56.617 00:29:56.617 Latency(us) 00:29:56.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.617 =================================================================================================================== 00:29:56.617 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:56.617 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3213510 00:29:56.876 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3210919 00:29:56.876 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3210919 ']' 00:29:56.876 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3210919 00:29:56.876 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:56.876 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:56.876 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3210919 00:29:56.876 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:56.876 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:56.876 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3210919' 00:29:56.876 killing process with pid 3210919 00:29:56.876 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3210919 00:29:56.876 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3210919 00:29:57.134 00:29:57.134 real 0m18.710s 00:29:57.134 user 0m37.496s 00:29:57.134 sys 0m4.465s 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:57.134 ************************************ 00:29:57.134 END TEST nvmf_digest_error 00:29:57.134 ************************************ 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:57.134 rmmod nvme_tcp 00:29:57.134 rmmod nvme_fabrics 00:29:57.134 rmmod nvme_keyring 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:57.134 00:56:14 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3210919 ']' 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3210919 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3210919 ']' 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3210919 00:29:57.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3210919) - No such process 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3210919 is not found' 00:29:57.134 Process with pid 3210919 is not found 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.134 00:56:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.668 00:56:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:59.669 00:29:59.669 real 0m45.676s 00:29:59.669 user 1m16.835s 00:29:59.669 sys 0m13.265s 00:29:59.669 00:56:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:59.669 00:56:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:59.669 ************************************ 00:29:59.669 END TEST nvmf_digest 00:29:59.669 ************************************ 00:29:59.669 00:56:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:59.669 00:56:16 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:29:59.669 00:56:16 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:29:59.669 00:56:16 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:29:59.669 00:56:16 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:59.669 00:56:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:59.669 00:56:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:59.669 00:56:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:59.669 ************************************ 00:29:59.669 START TEST nvmf_bdevperf 00:29:59.669 ************************************ 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:59.669 * Looking for test storage... 
00:29:59.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:59.669 00:56:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:04.937 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:04.937 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:04.937 Found net devices under 0000:af:00.0: cvl_0_0 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.937 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:04.938 Found net devices under 0000:af:00.1: cvl_0_1 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.938 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:05.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:30:05.197 00:30:05.197 --- 10.0.0.2 ping statistics --- 00:30:05.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.197 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:30:05.197 00:30:05.197 --- 10.0.0.1 ping statistics --- 00:30:05.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.197 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3217779 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3217779 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3217779 ']' 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:05.197 00:56:22 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:05.197 [2024-07-16 00:56:23.013459] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:30:05.197 [2024-07-16 00:56:23.013522] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.462 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.462 [2024-07-16 00:56:23.103953] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:05.462 [2024-07-16 00:56:23.210003] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:05.462 [2024-07-16 00:56:23.210052] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.462 [2024-07-16 00:56:23.210065] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.462 [2024-07-16 00:56:23.210076] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.462 [2024-07-16 00:56:23.210085] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.462 [2024-07-16 00:56:23.210156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.462 [2024-07-16 00:56:23.210291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:05.462 [2024-07-16 00:56:23.210295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.400 00:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:06.400 00:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:30:06.400 00:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:06.400 00:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:06.400 00:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.400 00:56:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.400 00:56:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:06.400 00:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.400 00:56:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.400 [2024-07-16 00:56:24.004860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.400 00:56:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.401 Malloc0 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:06.401 [2024-07-16 00:56:24.070962] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:06.401 { 00:30:06.401 "params": { 00:30:06.401 "name": "Nvme$subsystem", 00:30:06.401 "trtype": "$TEST_TRANSPORT", 00:30:06.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.401 "adrfam": "ipv4", 00:30:06.401 "trsvcid": "$NVMF_PORT", 00:30:06.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.401 "hdgst": ${hdgst:-false}, 00:30:06.401 "ddgst": ${ddgst:-false} 00:30:06.401 }, 00:30:06.401 "method": "bdev_nvme_attach_controller" 00:30:06.401 } 00:30:06.401 EOF 00:30:06.401 )") 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:06.401 00:56:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:06.401 "params": { 00:30:06.401 "name": "Nvme1", 00:30:06.401 "trtype": "tcp", 00:30:06.401 "traddr": "10.0.0.2", 00:30:06.401 "adrfam": "ipv4", 00:30:06.401 "trsvcid": "4420", 00:30:06.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.401 "hdgst": false, 00:30:06.401 "ddgst": false 00:30:06.401 }, 00:30:06.401 "method": "bdev_nvme_attach_controller" 00:30:06.401 }' 00:30:06.401 [2024-07-16 00:56:24.124055] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:30:06.401 [2024-07-16 00:56:24.124111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3218060 ] 00:30:06.401 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.401 [2024-07-16 00:56:24.204379] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.660 [2024-07-16 00:56:24.291188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.660 Running I/O for 1 seconds... 
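[editor's note] The target bring-up traced above (host/bdevperf.sh@15-21) reduces to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and issuing five RPCs. A minimal stand-alone sketch is below; it assumes a stock SPDK checkout and reuses the paths, addresses, and NQNs that appear in the trace (the autotest issues the same calls through its rpc_cmd wrapper rather than calling scripts/rpc.py directly).
    # Sketch only: reproduce the traced target setup by hand, from the root of an SPDK build.
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    sleep 2   # crude wait for /var/tmp/spdk.sock; the autotest uses waitforlisten instead
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192       # same transport options as the trace
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420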
00:30:08.040 00:30:08.040 Latency(us) 00:30:08.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.040 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:08.040 Verification LBA range: start 0x0 length 0x4000 00:30:08.040 Nvme1n1 : 1.02 6204.19 24.24 0.00 0.00 20522.05 3098.07 18588.39 00:30:08.040 =================================================================================================================== 00:30:08.040 Total : 6204.19 24.24 0.00 0.00 20522.05 3098.07 18588.39 00:30:08.040 00:56:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3218325 00:30:08.040 00:56:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:08.040 00:56:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:08.040 00:56:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:08.040 00:56:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:08.040 00:56:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:08.040 00:56:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.040 00:56:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.040 { 00:30:08.040 "params": { 00:30:08.040 "name": "Nvme$subsystem", 00:30:08.040 "trtype": "$TEST_TRANSPORT", 00:30:08.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.040 "adrfam": "ipv4", 00:30:08.040 "trsvcid": "$NVMF_PORT", 00:30:08.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.040 "hdgst": ${hdgst:-false}, 00:30:08.040 "ddgst": ${ddgst:-false} 00:30:08.040 }, 00:30:08.040 "method": "bdev_nvme_attach_controller" 00:30:08.040 } 00:30:08.040 EOF 00:30:08.040 )") 00:30:08.040 00:56:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:08.040 00:56:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:08.040 00:56:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:08.040 00:56:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:08.040 "params": { 00:30:08.040 "name": "Nvme1", 00:30:08.040 "trtype": "tcp", 00:30:08.040 "traddr": "10.0.0.2", 00:30:08.040 "adrfam": "ipv4", 00:30:08.040 "trsvcid": "4420", 00:30:08.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:08.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:08.040 "hdgst": false, 00:30:08.040 "ddgst": false 00:30:08.040 }, 00:30:08.040 "method": "bdev_nvme_attach_controller" 00:30:08.040 }' 00:30:08.040 [2024-07-16 00:56:25.764874] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:30:08.040 [2024-07-16 00:56:25.764936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3218325 ] 00:30:08.040 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.040 [2024-07-16 00:56:25.846247] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.299 [2024-07-16 00:56:25.927909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.558 Running I/O for 15 seconds... 
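[editor's note] The --json /dev/fd/63 argument above feeds bdevperf the output of gen_nvmf_target_json, whose inner bdev_nvme_attach_controller object is printed verbatim in the trace. A rough sketch of the fully expanded configuration is below: the params block is copied from the log, while the surrounding subsystems/bdev/config wrapper and the temporary file name are assumptions for illustration, not quotes from the trace.
    # Sketch only: hand-written equivalent of the config bdevperf receives on /dev/fd/63.
    cat > /tmp/bdevperf_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same bdevperf flags as the second run traced above, pointed at the file instead of a pipe fd.
    ./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f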
00:30:11.096 00:56:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3217779 00:30:11.096 00:56:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:11.096 [2024-07-16 00:56:28.732421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.096 [2024-07-16 00:56:28.732467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.096 [2024-07-16 00:56:28.732506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.096 [2024-07-16 00:56:28.732534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.096 [2024-07-16 00:56:28.732558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.096 [2024-07-16 00:56:28.732587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.096 [2024-07-16 00:56:28.732611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.096 [2024-07-16 00:56:28.732635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.096 [2024-07-16 00:56:28.732663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.732688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:123648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 
00:56:28.732721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.732745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.732772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:123672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.732798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.732820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.732850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.732873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.732896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:123712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.732927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.732954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.732982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.732999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733233] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 
[2024-07-16 00:56:28.733896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:124064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.733969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.733980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.096 [2024-07-16 00:56:28.733989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.734002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.734012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.734024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.734033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.734047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.734056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.096 [2024-07-16 00:56:28.734068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.096 [2024-07-16 00:56:28.734078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:124120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734110] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:124176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:124256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734550] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.734865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-07-16 00:56:28.734887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-07-16 00:56:28.734908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-07-16 00:56:28.734929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-07-16 00:56:28.734950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-07-16 00:56:28.734971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.734982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124648 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-07-16 00:56:28.734992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:11.097 [2024-07-16 00:56:28.735013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:11.097 [2024-07-16 00:56:28.735215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.097 [2024-07-16 00:56:28.735348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe73b0 is same with the state(5) to be set 00:30:11.097 [2024-07-16 00:56:28.735372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:11.097 [2024-07-16 00:56:28.735379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:11.097 [2024-07-16 00:56:28.735387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124528 len:8 PRP1 0x0 PRP2 0x0 00:30:11.097 [2024-07-16 00:56:28.735396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735445] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbe73b0 was disconnected and freed. reset controller. 
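The repeated "ABORTED - SQ DELETION (00/08)" completions above are status code type 0x0 (generic) with status code 0x08, which the NVMe specification defines as "Command Aborted due to SQ Deletion": every command still queued on the I/O qpair is completed manually with that status when the TCP qpair is torn down and freed. Below is a minimal decoding sketch (not SPDK's own code; the bit layout follows the NVMe completion queue entry, Dword 3 bits 31:16) showing where the "(00/08)" pair in the log comes from.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch only: decode the phase bit + 15-bit status field of an NVMe CQE.
 * Per the NVMe spec: bit 0 = phase tag, bits 8:1 = Status Code (SC),
 * bits 11:9 = Status Code Type (SCT), bit 15 = Do Not Retry (DNR). */
struct nvme_status {
    uint8_t sct;   /* status code type */
    uint8_t sc;    /* status code */
    uint8_t dnr;   /* do-not-retry flag */
};

static struct nvme_status decode_status(uint16_t raw)
{
    struct nvme_status s;
    s.sc  = (raw >> 1) & 0xff;
    s.sct = (raw >> 9) & 0x07;
    s.dnr = (raw >> 15) & 0x01;
    return s;
}

int main(void)
{
    /* SCT=0x0, SC=0x08 -> printed as "ABORTED - SQ DELETION (00/08)" above. */
    uint16_t raw = (uint16_t)((0x0 << 9) | (0x08 << 1));
    struct nvme_status s = decode_status(raw);
    printf("sct=%02x sc=%02x dnr=%u\n", s.sct, s.sc, s.dnr);
    return 0;
}
```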
00:30:11.097 [2024-07-16 00:56:28.735499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.097 [2024-07-16 00:56:28.735514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.097 [2024-07-16 00:56:28.735536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.097 [2024-07-16 00:56:28.735556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:11.097 [2024-07-16 00:56:28.735576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:11.097 [2024-07-16 00:56:28.735586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.097 [2024-07-16 00:56:28.740052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.097 [2024-07-16 00:56:28.740086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.097 [2024-07-16 00:56:28.740820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-07-16 00:56:28.740842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.097 [2024-07-16 00:56:28.740853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.097 [2024-07-16 00:56:28.741117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.097 [2024-07-16 00:56:28.741388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.097 [2024-07-16 00:56:28.741400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.097 [2024-07-16 00:56:28.741411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.097 [2024-07-16 00:56:28.745655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
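In the connect failures above, "errno = 111" is Linux ECONNREFUSED: the initiator's TCP connection attempt to 10.0.0.2:4420 (the NVMe/TCP well-known port) is actively refused while the target side is down during the reset, and the "(9): Bad file descriptor" that follows is EBADF reported when the already-closed socket is flushed. A minimal, self-contained POSIX sketch (plain sockets, not SPDK's sock layer) that reproduces the same errno when nothing is listening on the target address:

```c
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the peer actively refusing, this prints errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```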
00:30:11.097 [2024-07-16 00:56:28.754933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.097 [2024-07-16 00:56:28.755505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.097 [2024-07-16 00:56:28.755529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.097 [2024-07-16 00:56:28.755541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.098 [2024-07-16 00:56:28.755807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.098 [2024-07-16 00:56:28.756073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.098 [2024-07-16 00:56:28.756086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.098 [2024-07-16 00:56:28.756096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.098 [2024-07-16 00:56:28.760351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.098 [2024-07-16 00:56:28.769631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.098 [2024-07-16 00:56:28.770177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-07-16 00:56:28.770200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.098 [2024-07-16 00:56:28.770211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.098 [2024-07-16 00:56:28.770496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.098 [2024-07-16 00:56:28.770764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.098 [2024-07-16 00:56:28.770777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.098 [2024-07-16 00:56:28.770787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.098 [2024-07-16 00:56:28.775027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.098 [2024-07-16 00:56:28.784298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.098 [2024-07-16 00:56:28.784798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-07-16 00:56:28.784840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.098 [2024-07-16 00:56:28.784862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.098 [2024-07-16 00:56:28.785457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.098 [2024-07-16 00:56:28.786006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.098 [2024-07-16 00:56:28.786019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.098 [2024-07-16 00:56:28.786029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.098 [2024-07-16 00:56:28.790277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.098 [2024-07-16 00:56:28.799046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.098 [2024-07-16 00:56:28.799621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-07-16 00:56:28.799664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.098 [2024-07-16 00:56:28.799687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.098 [2024-07-16 00:56:28.800152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.098 [2024-07-16 00:56:28.800424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.098 [2024-07-16 00:56:28.800437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.098 [2024-07-16 00:56:28.800446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.098 [2024-07-16 00:56:28.804697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.098 [2024-07-16 00:56:28.813714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.098 [2024-07-16 00:56:28.814187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-07-16 00:56:28.814230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.098 [2024-07-16 00:56:28.814252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.098 [2024-07-16 00:56:28.814854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.098 [2024-07-16 00:56:28.815226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.098 [2024-07-16 00:56:28.815243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.098 [2024-07-16 00:56:28.815266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.098 [2024-07-16 00:56:28.821500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.098 [2024-07-16 00:56:28.829073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.098 [2024-07-16 00:56:28.829605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-07-16 00:56:28.829648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.098 [2024-07-16 00:56:28.829670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.098 [2024-07-16 00:56:28.830218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.098 [2024-07-16 00:56:28.830492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.098 [2024-07-16 00:56:28.830505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.098 [2024-07-16 00:56:28.830515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.098 [2024-07-16 00:56:28.834755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.098 [2024-07-16 00:56:28.843770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.098 [2024-07-16 00:56:28.844243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-07-16 00:56:28.844271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.098 [2024-07-16 00:56:28.844282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.098 [2024-07-16 00:56:28.844546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.098 [2024-07-16 00:56:28.844811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.098 [2024-07-16 00:56:28.844823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.098 [2024-07-16 00:56:28.844833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.098 [2024-07-16 00:56:28.849077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.098 [2024-07-16 00:56:28.858345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.098 [2024-07-16 00:56:28.858929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-07-16 00:56:28.858971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.098 [2024-07-16 00:56:28.858993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.098 [2024-07-16 00:56:28.859585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.098 [2024-07-16 00:56:28.860166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.098 [2024-07-16 00:56:28.860191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.098 [2024-07-16 00:56:28.860211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.098 [2024-07-16 00:56:28.864486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.098 [2024-07-16 00:56:28.873019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.098 [2024-07-16 00:56:28.873583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-07-16 00:56:28.873606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.098 [2024-07-16 00:56:28.873616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.098 [2024-07-16 00:56:28.873880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.098 [2024-07-16 00:56:28.874146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.098 [2024-07-16 00:56:28.874159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.098 [2024-07-16 00:56:28.874168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.098 [2024-07-16 00:56:28.878412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.098 [2024-07-16 00:56:28.887685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.098 [2024-07-16 00:56:28.888266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-07-16 00:56:28.888308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.098 [2024-07-16 00:56:28.888330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.098 [2024-07-16 00:56:28.888908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.098 [2024-07-16 00:56:28.889251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.098 [2024-07-16 00:56:28.889270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.098 [2024-07-16 00:56:28.889280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.098 [2024-07-16 00:56:28.893516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.098 [2024-07-16 00:56:28.902277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.098 [2024-07-16 00:56:28.902847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-07-16 00:56:28.902889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.098 [2024-07-16 00:56:28.902911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.098 [2024-07-16 00:56:28.903431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.098 [2024-07-16 00:56:28.903697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.098 [2024-07-16 00:56:28.903710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.098 [2024-07-16 00:56:28.903720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.098 [2024-07-16 00:56:28.907956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.098 [2024-07-16 00:56:28.916969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.098 [2024-07-16 00:56:28.917518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.098 [2024-07-16 00:56:28.917561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.098 [2024-07-16 00:56:28.917590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.098 [2024-07-16 00:56:28.918168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.098 [2024-07-16 00:56:28.918604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.098 [2024-07-16 00:56:28.918617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.098 [2024-07-16 00:56:28.918627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.098 [2024-07-16 00:56:28.922864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.358 [2024-07-16 00:56:28.931634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.358 [2024-07-16 00:56:28.932177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.358 [2024-07-16 00:56:28.932199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.358 [2024-07-16 00:56:28.932210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.358 [2024-07-16 00:56:28.932481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.358 [2024-07-16 00:56:28.932748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.358 [2024-07-16 00:56:28.932761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.358 [2024-07-16 00:56:28.932770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.358 [2024-07-16 00:56:28.937018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.358 [2024-07-16 00:56:28.946295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.358 [2024-07-16 00:56:28.946787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.358 [2024-07-16 00:56:28.946809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.358 [2024-07-16 00:56:28.946819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.358 [2024-07-16 00:56:28.947084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.358 [2024-07-16 00:56:28.947356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.358 [2024-07-16 00:56:28.947369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.358 [2024-07-16 00:56:28.947378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.358 [2024-07-16 00:56:28.951618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.358 [2024-07-16 00:56:28.960880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.358 [2024-07-16 00:56:28.961354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.358 [2024-07-16 00:56:28.961376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.358 [2024-07-16 00:56:28.961386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.358 [2024-07-16 00:56:28.961650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.358 [2024-07-16 00:56:28.961920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.358 [2024-07-16 00:56:28.961933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.358 [2024-07-16 00:56:28.961943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.358 [2024-07-16 00:56:28.966187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.358 [2024-07-16 00:56:28.975467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.358 [2024-07-16 00:56:28.976030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.358 [2024-07-16 00:56:28.976052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.358 [2024-07-16 00:56:28.976062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.358 [2024-07-16 00:56:28.976334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.358 [2024-07-16 00:56:28.976601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.358 [2024-07-16 00:56:28.976613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.358 [2024-07-16 00:56:28.976622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.358 [2024-07-16 00:56:28.980863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.358 [2024-07-16 00:56:28.990125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.358 [2024-07-16 00:56:28.990723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.358 [2024-07-16 00:56:28.990766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.358 [2024-07-16 00:56:28.990788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.358 [2024-07-16 00:56:28.991380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.358 [2024-07-16 00:56:28.991661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.358 [2024-07-16 00:56:28.991673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.358 [2024-07-16 00:56:28.991683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.358 [2024-07-16 00:56:28.995929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.358 [2024-07-16 00:56:29.004701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.358 [2024-07-16 00:56:29.005293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.358 [2024-07-16 00:56:29.005337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.358 [2024-07-16 00:56:29.005359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.358 [2024-07-16 00:56:29.005936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.359 [2024-07-16 00:56:29.006529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.359 [2024-07-16 00:56:29.006556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.359 [2024-07-16 00:56:29.006577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.359 [2024-07-16 00:56:29.010914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.359 [2024-07-16 00:56:29.019436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.359 [2024-07-16 00:56:29.019971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.359 [2024-07-16 00:56:29.019993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.359 [2024-07-16 00:56:29.020004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.359 [2024-07-16 00:56:29.020275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.359 [2024-07-16 00:56:29.020542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.359 [2024-07-16 00:56:29.020555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.359 [2024-07-16 00:56:29.020565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.359 [2024-07-16 00:56:29.024805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.359 [2024-07-16 00:56:29.034072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.359 [2024-07-16 00:56:29.034530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.359 [2024-07-16 00:56:29.034551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.359 [2024-07-16 00:56:29.034562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.359 [2024-07-16 00:56:29.034826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.359 [2024-07-16 00:56:29.035091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.359 [2024-07-16 00:56:29.035104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.359 [2024-07-16 00:56:29.035113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.359 [2024-07-16 00:56:29.039359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.359 [2024-07-16 00:56:29.048622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.359 [2024-07-16 00:56:29.049101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.359 [2024-07-16 00:56:29.049141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.359 [2024-07-16 00:56:29.049163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.359 [2024-07-16 00:56:29.049702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.359 [2024-07-16 00:56:29.049969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.359 [2024-07-16 00:56:29.049981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.359 [2024-07-16 00:56:29.049991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.359 [2024-07-16 00:56:29.054236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.359 [2024-07-16 00:56:29.063260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.359 [2024-07-16 00:56:29.063832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.359 [2024-07-16 00:56:29.063874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.359 [2024-07-16 00:56:29.063904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.359 [2024-07-16 00:56:29.064473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.359 [2024-07-16 00:56:29.064740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.359 [2024-07-16 00:56:29.064753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.359 [2024-07-16 00:56:29.064762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.359 [2024-07-16 00:56:29.068998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.359 [2024-07-16 00:56:29.078021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.359 [2024-07-16 00:56:29.078592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.359 [2024-07-16 00:56:29.078635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.359 [2024-07-16 00:56:29.078657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.359 [2024-07-16 00:56:29.079234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.359 [2024-07-16 00:56:29.079731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.359 [2024-07-16 00:56:29.079744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.359 [2024-07-16 00:56:29.079753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.359 [2024-07-16 00:56:29.083989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.359 [2024-07-16 00:56:29.092745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.359 [2024-07-16 00:56:29.093287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.359 [2024-07-16 00:56:29.093330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.359 [2024-07-16 00:56:29.093352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.359 [2024-07-16 00:56:29.093705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.359 [2024-07-16 00:56:29.093970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.359 [2024-07-16 00:56:29.093983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.359 [2024-07-16 00:56:29.093993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.359 [2024-07-16 00:56:29.098240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.359 [2024-07-16 00:56:29.107509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.359 [2024-07-16 00:56:29.107993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.359 [2024-07-16 00:56:29.108015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.359 [2024-07-16 00:56:29.108025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.359 [2024-07-16 00:56:29.108297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.359 [2024-07-16 00:56:29.108563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.359 [2024-07-16 00:56:29.108576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.359 [2024-07-16 00:56:29.108590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.359 [2024-07-16 00:56:29.112832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.359 [2024-07-16 00:56:29.122098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.359 [2024-07-16 00:56:29.122671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.359 [2024-07-16 00:56:29.122714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.359 [2024-07-16 00:56:29.122736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.359 [2024-07-16 00:56:29.123326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.359 [2024-07-16 00:56:29.123834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.359 [2024-07-16 00:56:29.123846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.359 [2024-07-16 00:56:29.123856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.359 [2024-07-16 00:56:29.128095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.359 [2024-07-16 00:56:29.136875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.359 [2024-07-16 00:56:29.137388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.359 [2024-07-16 00:56:29.137410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.359 [2024-07-16 00:56:29.137420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.359 [2024-07-16 00:56:29.137684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.359 [2024-07-16 00:56:29.137950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.359 [2024-07-16 00:56:29.137963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.359 [2024-07-16 00:56:29.137972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.359 [2024-07-16 00:56:29.142214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.359 [2024-07-16 00:56:29.151484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.359 [2024-07-16 00:56:29.152055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.359 [2024-07-16 00:56:29.152099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.359 [2024-07-16 00:56:29.152120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.359 [2024-07-16 00:56:29.152713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.359 [2024-07-16 00:56:29.153018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.359 [2024-07-16 00:56:29.153031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.359 [2024-07-16 00:56:29.153041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.359 [2024-07-16 00:56:29.157290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.359 [2024-07-16 00:56:29.166073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.359 [2024-07-16 00:56:29.166618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.359 [2024-07-16 00:56:29.166640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.360 [2024-07-16 00:56:29.166651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.360 [2024-07-16 00:56:29.166914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.360 [2024-07-16 00:56:29.167179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.360 [2024-07-16 00:56:29.167191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.360 [2024-07-16 00:56:29.167201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.360 [2024-07-16 00:56:29.171446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.360 [2024-07-16 00:56:29.180716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.360 [2024-07-16 00:56:29.181276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.360 [2024-07-16 00:56:29.181319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.360 [2024-07-16 00:56:29.181341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.360 [2024-07-16 00:56:29.181919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.360 [2024-07-16 00:56:29.182206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.360 [2024-07-16 00:56:29.182218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.360 [2024-07-16 00:56:29.182228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.360 [2024-07-16 00:56:29.186470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.360 [2024-07-16 00:56:29.195480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.619 [2024-07-16 00:56:29.196020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.619 [2024-07-16 00:56:29.196071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.619 [2024-07-16 00:56:29.196093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.619 [2024-07-16 00:56:29.196688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.619 [2024-07-16 00:56:29.197191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.619 [2024-07-16 00:56:29.197203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.619 [2024-07-16 00:56:29.197213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.619 [2024-07-16 00:56:29.201461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.619 [2024-07-16 00:56:29.210221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.619 [2024-07-16 00:56:29.210787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.619 [2024-07-16 00:56:29.210808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.619 [2024-07-16 00:56:29.210819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.619 [2024-07-16 00:56:29.211086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.619 [2024-07-16 00:56:29.211358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.619 [2024-07-16 00:56:29.211372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.619 [2024-07-16 00:56:29.211381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.619 [2024-07-16 00:56:29.215869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.619 [2024-07-16 00:56:29.224892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.619 [2024-07-16 00:56:29.225476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.619 [2024-07-16 00:56:29.225523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.619 [2024-07-16 00:56:29.225546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.619 [2024-07-16 00:56:29.226096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.619 [2024-07-16 00:56:29.226367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.619 [2024-07-16 00:56:29.226380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.619 [2024-07-16 00:56:29.226391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.619 [2024-07-16 00:56:29.230636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.619 [2024-07-16 00:56:29.239668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.619 [2024-07-16 00:56:29.240125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.619 [2024-07-16 00:56:29.240152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.619 [2024-07-16 00:56:29.240163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.619 [2024-07-16 00:56:29.240437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.619 [2024-07-16 00:56:29.240704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.619 [2024-07-16 00:56:29.240717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.619 [2024-07-16 00:56:29.240728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.619 [2024-07-16 00:56:29.244971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.619 [2024-07-16 00:56:29.254239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.619 [2024-07-16 00:56:29.254809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.619 [2024-07-16 00:56:29.254832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.619 [2024-07-16 00:56:29.254842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.620 [2024-07-16 00:56:29.255107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.620 [2024-07-16 00:56:29.255378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.620 [2024-07-16 00:56:29.255391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.620 [2024-07-16 00:56:29.255401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.620 [2024-07-16 00:56:29.259648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.620 [2024-07-16 00:56:29.268916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.620 [2024-07-16 00:56:29.269375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.620 [2024-07-16 00:56:29.269407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.620 [2024-07-16 00:56:29.269418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.620 [2024-07-16 00:56:29.269685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.620 [2024-07-16 00:56:29.269954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.620 [2024-07-16 00:56:29.269967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.620 [2024-07-16 00:56:29.269977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.620 [2024-07-16 00:56:29.274279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.620 [2024-07-16 00:56:29.283648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.620 [2024-07-16 00:56:29.284191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.620 [2024-07-16 00:56:29.284214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.620 [2024-07-16 00:56:29.284225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.620 [2024-07-16 00:56:29.284501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.620 [2024-07-16 00:56:29.284771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.620 [2024-07-16 00:56:29.284784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.620 [2024-07-16 00:56:29.284793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.620 [2024-07-16 00:56:29.289079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.620 [2024-07-16 00:56:29.298362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.620 [2024-07-16 00:56:29.298932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.620 [2024-07-16 00:56:29.298975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.620 [2024-07-16 00:56:29.298997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.620 [2024-07-16 00:56:29.299512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.620 [2024-07-16 00:56:29.299780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.620 [2024-07-16 00:56:29.299792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.620 [2024-07-16 00:56:29.299803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.620 [2024-07-16 00:56:29.304048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.620 [2024-07-16 00:56:29.313075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.620 [2024-07-16 00:56:29.313555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.620 [2024-07-16 00:56:29.313581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.620 [2024-07-16 00:56:29.313592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.620 [2024-07-16 00:56:29.313857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.620 [2024-07-16 00:56:29.314123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.620 [2024-07-16 00:56:29.314135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.620 [2024-07-16 00:56:29.314145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.620 [2024-07-16 00:56:29.318398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.620 [2024-07-16 00:56:29.327660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.620 [2024-07-16 00:56:29.328181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.620 [2024-07-16 00:56:29.328223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.620 [2024-07-16 00:56:29.328245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.620 [2024-07-16 00:56:29.328843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.620 [2024-07-16 00:56:29.329337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.620 [2024-07-16 00:56:29.329350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.620 [2024-07-16 00:56:29.329360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.620 [2024-07-16 00:56:29.333607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.620 [2024-07-16 00:56:29.342362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.620 [2024-07-16 00:56:29.342895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.620 [2024-07-16 00:56:29.342917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.620 [2024-07-16 00:56:29.342927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.620 [2024-07-16 00:56:29.343192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.620 [2024-07-16 00:56:29.343463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.620 [2024-07-16 00:56:29.343477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.620 [2024-07-16 00:56:29.343486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.620 [2024-07-16 00:56:29.347733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.620 [2024-07-16 00:56:29.357009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.620 [2024-07-16 00:56:29.357580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.620 [2024-07-16 00:56:29.357622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.620 [2024-07-16 00:56:29.357644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.620 [2024-07-16 00:56:29.358183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.620 [2024-07-16 00:56:29.358460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.620 [2024-07-16 00:56:29.358473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.620 [2024-07-16 00:56:29.358483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.620 [2024-07-16 00:56:29.362719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.620 [2024-07-16 00:56:29.371754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.620 [2024-07-16 00:56:29.372317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.620 [2024-07-16 00:56:29.372338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.620 [2024-07-16 00:56:29.372349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.620 [2024-07-16 00:56:29.372614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.620 [2024-07-16 00:56:29.372879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.620 [2024-07-16 00:56:29.372892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.620 [2024-07-16 00:56:29.372902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.620 [2024-07-16 00:56:29.377144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.620 [2024-07-16 00:56:29.386419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.620 [2024-07-16 00:56:29.386991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.620 [2024-07-16 00:56:29.387034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.620 [2024-07-16 00:56:29.387056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.620 [2024-07-16 00:56:29.387502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.620 [2024-07-16 00:56:29.387768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.620 [2024-07-16 00:56:29.387781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.620 [2024-07-16 00:56:29.387790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.620 [2024-07-16 00:56:29.392033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.620 [2024-07-16 00:56:29.401063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.620 [2024-07-16 00:56:29.401633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.620 [2024-07-16 00:56:29.401655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.620 [2024-07-16 00:56:29.401666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.620 [2024-07-16 00:56:29.401930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.620 [2024-07-16 00:56:29.402195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.620 [2024-07-16 00:56:29.402207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.620 [2024-07-16 00:56:29.402216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.620 [2024-07-16 00:56:29.406461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.620 [2024-07-16 00:56:29.415725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.620 [2024-07-16 00:56:29.416298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.620 [2024-07-16 00:56:29.416341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.621 [2024-07-16 00:56:29.416363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.621 [2024-07-16 00:56:29.416940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.621 [2024-07-16 00:56:29.417398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.621 [2024-07-16 00:56:29.417411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.621 [2024-07-16 00:56:29.417421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.621 [2024-07-16 00:56:29.421660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.621 [2024-07-16 00:56:29.430436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.621 [2024-07-16 00:56:29.430918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.621 [2024-07-16 00:56:29.430939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.621 [2024-07-16 00:56:29.430949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.621 [2024-07-16 00:56:29.431214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.621 [2024-07-16 00:56:29.431487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.621 [2024-07-16 00:56:29.431501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.621 [2024-07-16 00:56:29.431511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.621 [2024-07-16 00:56:29.435752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.621 [2024-07-16 00:56:29.445024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.621 [2024-07-16 00:56:29.445589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.621 [2024-07-16 00:56:29.445632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.621 [2024-07-16 00:56:29.445654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.621 [2024-07-16 00:56:29.446232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.621 [2024-07-16 00:56:29.446797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.621 [2024-07-16 00:56:29.446811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.621 [2024-07-16 00:56:29.446820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.621 [2024-07-16 00:56:29.451064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.880 [2024-07-16 00:56:29.459585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.880 [2024-07-16 00:56:29.460158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.880 [2024-07-16 00:56:29.460200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.880 [2024-07-16 00:56:29.460229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.880 [2024-07-16 00:56:29.460799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.880 [2024-07-16 00:56:29.461066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.880 [2024-07-16 00:56:29.461078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.880 [2024-07-16 00:56:29.461088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.880 [2024-07-16 00:56:29.465342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.880 [2024-07-16 00:56:29.474381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.880 [2024-07-16 00:56:29.474948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.880 [2024-07-16 00:56:29.474970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.880 [2024-07-16 00:56:29.474980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.880 [2024-07-16 00:56:29.475245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.880 [2024-07-16 00:56:29.475518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.880 [2024-07-16 00:56:29.475531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.880 [2024-07-16 00:56:29.475541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.880 [2024-07-16 00:56:29.479781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.880 [2024-07-16 00:56:29.489041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.880 [2024-07-16 00:56:29.489600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.880 [2024-07-16 00:56:29.489622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.880 [2024-07-16 00:56:29.489633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.880 [2024-07-16 00:56:29.489897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.880 [2024-07-16 00:56:29.490162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.880 [2024-07-16 00:56:29.490175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.880 [2024-07-16 00:56:29.490185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.880 [2024-07-16 00:56:29.494429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.880 [2024-07-16 00:56:29.503699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.880 [2024-07-16 00:56:29.504266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.880 [2024-07-16 00:56:29.504306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.880 [2024-07-16 00:56:29.504329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.880 [2024-07-16 00:56:29.504894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.880 [2024-07-16 00:56:29.505160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.880 [2024-07-16 00:56:29.505172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.881 [2024-07-16 00:56:29.505185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.881 [2024-07-16 00:56:29.509439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.881 [2024-07-16 00:56:29.518459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.881 [2024-07-16 00:56:29.519028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.881 [2024-07-16 00:56:29.519050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.881 [2024-07-16 00:56:29.519060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.881 [2024-07-16 00:56:29.519332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.881 [2024-07-16 00:56:29.519598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.881 [2024-07-16 00:56:29.519611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.881 [2024-07-16 00:56:29.519620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.881 [2024-07-16 00:56:29.523866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.881 [2024-07-16 00:56:29.533133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.881 [2024-07-16 00:56:29.533677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.881 [2024-07-16 00:56:29.533720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.881 [2024-07-16 00:56:29.533741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.881 [2024-07-16 00:56:29.534335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.881 [2024-07-16 00:56:29.534901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.881 [2024-07-16 00:56:29.534918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.881 [2024-07-16 00:56:29.534930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.881 [2024-07-16 00:56:29.540783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.881 [2024-07-16 00:56:29.548199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.881 [2024-07-16 00:56:29.548695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.881 [2024-07-16 00:56:29.548718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.881 [2024-07-16 00:56:29.548729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.881 [2024-07-16 00:56:29.548993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.881 [2024-07-16 00:56:29.549265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.881 [2024-07-16 00:56:29.549278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.881 [2024-07-16 00:56:29.549287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.881 [2024-07-16 00:56:29.553537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.881 [2024-07-16 00:56:29.562809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.881 [2024-07-16 00:56:29.563352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.881 [2024-07-16 00:56:29.563373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.881 [2024-07-16 00:56:29.563384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.881 [2024-07-16 00:56:29.563647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.881 [2024-07-16 00:56:29.563912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.881 [2024-07-16 00:56:29.563925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.881 [2024-07-16 00:56:29.563934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.881 [2024-07-16 00:56:29.568179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.881 [2024-07-16 00:56:29.577468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.881 [2024-07-16 00:56:29.578039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.881 [2024-07-16 00:56:29.578080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.881 [2024-07-16 00:56:29.578103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.881 [2024-07-16 00:56:29.578688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.881 [2024-07-16 00:56:29.578956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.881 [2024-07-16 00:56:29.578968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.881 [2024-07-16 00:56:29.578978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.881 [2024-07-16 00:56:29.583222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.881 [2024-07-16 00:56:29.592245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.881 [2024-07-16 00:56:29.592734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.881 [2024-07-16 00:56:29.592757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.881 [2024-07-16 00:56:29.592767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.881 [2024-07-16 00:56:29.593031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.881 [2024-07-16 00:56:29.593303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.881 [2024-07-16 00:56:29.593317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.881 [2024-07-16 00:56:29.593328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.881 [2024-07-16 00:56:29.597573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.881 [2024-07-16 00:56:29.606850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.881 [2024-07-16 00:56:29.607342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.881 [2024-07-16 00:56:29.607365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.881 [2024-07-16 00:56:29.607375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.881 [2024-07-16 00:56:29.607643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.881 [2024-07-16 00:56:29.607909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.881 [2024-07-16 00:56:29.607923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.881 [2024-07-16 00:56:29.607933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.881 [2024-07-16 00:56:29.612177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.881 [2024-07-16 00:56:29.621467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.881 [2024-07-16 00:56:29.622027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.881 [2024-07-16 00:56:29.622049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.881 [2024-07-16 00:56:29.622060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.881 [2024-07-16 00:56:29.622340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.881 [2024-07-16 00:56:29.622607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.881 [2024-07-16 00:56:29.622620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.881 [2024-07-16 00:56:29.622630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.881 [2024-07-16 00:56:29.626876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.881 [2024-07-16 00:56:29.636148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.881 [2024-07-16 00:56:29.636620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.881 [2024-07-16 00:56:29.636642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.881 [2024-07-16 00:56:29.636653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.881 [2024-07-16 00:56:29.636917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.881 [2024-07-16 00:56:29.637183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.881 [2024-07-16 00:56:29.637195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.881 [2024-07-16 00:56:29.637205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.881 [2024-07-16 00:56:29.641455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.881 [2024-07-16 00:56:29.650723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.881 [2024-07-16 00:56:29.651210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.882 [2024-07-16 00:56:29.651232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.882 [2024-07-16 00:56:29.651242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.882 [2024-07-16 00:56:29.651513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.882 [2024-07-16 00:56:29.651780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.882 [2024-07-16 00:56:29.651793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.882 [2024-07-16 00:56:29.651806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.882 [2024-07-16 00:56:29.656046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.882 [2024-07-16 00:56:29.665326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.882 [2024-07-16 00:56:29.665840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.882 [2024-07-16 00:56:29.665862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.882 [2024-07-16 00:56:29.665872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.882 [2024-07-16 00:56:29.666136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.882 [2024-07-16 00:56:29.666408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.882 [2024-07-16 00:56:29.666421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.882 [2024-07-16 00:56:29.666430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.882 [2024-07-16 00:56:29.670695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.882 [2024-07-16 00:56:29.679977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.882 [2024-07-16 00:56:29.680496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.882 [2024-07-16 00:56:29.680518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.882 [2024-07-16 00:56:29.680528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.882 [2024-07-16 00:56:29.680793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.882 [2024-07-16 00:56:29.681057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.882 [2024-07-16 00:56:29.681070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.882 [2024-07-16 00:56:29.681080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.882 [2024-07-16 00:56:29.685335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:11.882 [2024-07-16 00:56:29.694603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.882 [2024-07-16 00:56:29.695077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.882 [2024-07-16 00:56:29.695099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.882 [2024-07-16 00:56:29.695110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.882 [2024-07-16 00:56:29.695379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.882 [2024-07-16 00:56:29.695645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.882 [2024-07-16 00:56:29.695658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.882 [2024-07-16 00:56:29.695668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.882 [2024-07-16 00:56:29.699912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:11.882 [2024-07-16 00:56:29.709175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:11.882 [2024-07-16 00:56:29.709644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.882 [2024-07-16 00:56:29.709671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:11.882 [2024-07-16 00:56:29.709681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:11.882 [2024-07-16 00:56:29.709945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:11.882 [2024-07-16 00:56:29.710210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:11.882 [2024-07-16 00:56:29.710223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:11.882 [2024-07-16 00:56:29.710232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:11.882 [2024-07-16 00:56:29.714491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
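Note: after each refused connect, the "Failed to flush tqpair=0x9b6080 (9): Bad file descriptor" entries report errno 9 (EBADF on Linux), i.e. the qpair's socket descriptor has already been torn down by the time the flush runs. A hedged, self-contained C sketch of that errno follows; it uses a plain pipe rather than SPDK's socket layer, purely to show how operating on a closed descriptor yields EBADF.

/* Minimal sketch: using a file descriptor that has already been closed
 * reports EBADF (errno 9 on Linux), matching the "(9): Bad file
 * descriptor" flush failures above. The pipe is illustrative only. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    close(fds[1]);                 /* tear the write end down first */

    char byte = 0;
    if (write(fds[1], &byte, 1) < 0) {
        /* Expected: errno == EBADF (9), since fds[1] is no longer valid. */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fds[0]);
    return 0;
}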
00:30:12.141 [2024-07-16 00:56:29.723767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.141 [2024-07-16 00:56:29.724177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.141 [2024-07-16 00:56:29.724199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.141 [2024-07-16 00:56:29.724209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.142 [2024-07-16 00:56:29.724482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.142 [2024-07-16 00:56:29.724748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.142 [2024-07-16 00:56:29.724760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.142 [2024-07-16 00:56:29.724770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.142 [2024-07-16 00:56:29.729015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.142 [2024-07-16 00:56:29.738550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.142 [2024-07-16 00:56:29.739086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-07-16 00:56:29.739108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-07-16 00:56:29.739119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.142 [2024-07-16 00:56:29.739390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.142 [2024-07-16 00:56:29.739658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.142 [2024-07-16 00:56:29.739670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.142 [2024-07-16 00:56:29.739680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.142 [2024-07-16 00:56:29.743920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.142 [2024-07-16 00:56:29.753191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.142 [2024-07-16 00:56:29.753760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-07-16 00:56:29.753785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-07-16 00:56:29.753796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.142 [2024-07-16 00:56:29.754059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.142 [2024-07-16 00:56:29.754337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.142 [2024-07-16 00:56:29.754351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.142 [2024-07-16 00:56:29.754361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.142 [2024-07-16 00:56:29.758607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.142 [2024-07-16 00:56:29.767839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.142 [2024-07-16 00:56:29.768388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-07-16 00:56:29.768411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-07-16 00:56:29.768421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.142 [2024-07-16 00:56:29.768685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.142 [2024-07-16 00:56:29.768951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.142 [2024-07-16 00:56:29.768964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.142 [2024-07-16 00:56:29.768973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.142 [2024-07-16 00:56:29.773229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.142 [2024-07-16 00:56:29.782508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.142 [2024-07-16 00:56:29.783043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-07-16 00:56:29.783065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-07-16 00:56:29.783075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.142 [2024-07-16 00:56:29.783345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.142 [2024-07-16 00:56:29.783613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.142 [2024-07-16 00:56:29.783625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.142 [2024-07-16 00:56:29.783635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.142 [2024-07-16 00:56:29.787877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.142 [2024-07-16 00:56:29.797141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.142 [2024-07-16 00:56:29.797671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-07-16 00:56:29.797694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-07-16 00:56:29.797705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.142 [2024-07-16 00:56:29.797969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.142 [2024-07-16 00:56:29.798236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.142 [2024-07-16 00:56:29.798248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.142 [2024-07-16 00:56:29.798264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.142 [2024-07-16 00:56:29.802513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.142 [2024-07-16 00:56:29.811795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.142 [2024-07-16 00:56:29.812333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-07-16 00:56:29.812355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-07-16 00:56:29.812366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.142 [2024-07-16 00:56:29.812631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.142 [2024-07-16 00:56:29.812897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.142 [2024-07-16 00:56:29.812909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.142 [2024-07-16 00:56:29.812918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.142 [2024-07-16 00:56:29.817172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.142 [2024-07-16 00:56:29.826453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.142 [2024-07-16 00:56:29.827018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-07-16 00:56:29.827040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-07-16 00:56:29.827050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.142 [2024-07-16 00:56:29.827322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.142 [2024-07-16 00:56:29.827587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.142 [2024-07-16 00:56:29.827600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.142 [2024-07-16 00:56:29.827610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.142 [2024-07-16 00:56:29.831856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.142 [2024-07-16 00:56:29.841127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.142 [2024-07-16 00:56:29.841597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-07-16 00:56:29.841619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-07-16 00:56:29.841630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.142 [2024-07-16 00:56:29.841893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.142 [2024-07-16 00:56:29.842160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.142 [2024-07-16 00:56:29.842172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.142 [2024-07-16 00:56:29.842182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.142 [2024-07-16 00:56:29.846427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.142 [2024-07-16 00:56:29.855699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.142 [2024-07-16 00:56:29.856268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-07-16 00:56:29.856291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-07-16 00:56:29.856305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.142 [2024-07-16 00:56:29.856569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.142 [2024-07-16 00:56:29.856835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.142 [2024-07-16 00:56:29.856848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.142 [2024-07-16 00:56:29.856858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.142 [2024-07-16 00:56:29.861103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.142 [2024-07-16 00:56:29.870401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.142 [2024-07-16 00:56:29.870894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-07-16 00:56:29.870916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.142 [2024-07-16 00:56:29.870926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.142 [2024-07-16 00:56:29.871190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.142 [2024-07-16 00:56:29.871462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.142 [2024-07-16 00:56:29.871475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.142 [2024-07-16 00:56:29.871485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.142 [2024-07-16 00:56:29.875728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.142 [2024-07-16 00:56:29.884997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.142 [2024-07-16 00:56:29.885489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.142 [2024-07-16 00:56:29.885511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-07-16 00:56:29.885521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.143 [2024-07-16 00:56:29.885785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.143 [2024-07-16 00:56:29.886049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.143 [2024-07-16 00:56:29.886062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.143 [2024-07-16 00:56:29.886071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.143 [2024-07-16 00:56:29.890368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.143 [2024-07-16 00:56:29.899645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.143 [2024-07-16 00:56:29.900181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.143 [2024-07-16 00:56:29.900204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-07-16 00:56:29.900216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.143 [2024-07-16 00:56:29.900486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.143 [2024-07-16 00:56:29.900753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.143 [2024-07-16 00:56:29.900770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.143 [2024-07-16 00:56:29.900779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.143 [2024-07-16 00:56:29.905027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.143 [2024-07-16 00:56:29.914302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.143 [2024-07-16 00:56:29.914859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.143 [2024-07-16 00:56:29.914881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-07-16 00:56:29.914892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.143 [2024-07-16 00:56:29.915157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.143 [2024-07-16 00:56:29.915431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.143 [2024-07-16 00:56:29.915444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.143 [2024-07-16 00:56:29.915453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.143 [2024-07-16 00:56:29.919701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.143 [2024-07-16 00:56:29.928981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.143 [2024-07-16 00:56:29.929545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.143 [2024-07-16 00:56:29.929568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-07-16 00:56:29.929579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.143 [2024-07-16 00:56:29.929843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.143 [2024-07-16 00:56:29.930110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.143 [2024-07-16 00:56:29.930122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.143 [2024-07-16 00:56:29.930131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.143 [2024-07-16 00:56:29.934378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.143 [2024-07-16 00:56:29.943647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.143 [2024-07-16 00:56:29.944136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.143 [2024-07-16 00:56:29.944158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-07-16 00:56:29.944168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.143 [2024-07-16 00:56:29.944438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.143 [2024-07-16 00:56:29.944704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.143 [2024-07-16 00:56:29.944716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.143 [2024-07-16 00:56:29.944726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.143 [2024-07-16 00:56:29.948970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.143 [2024-07-16 00:56:29.958247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.143 [2024-07-16 00:56:29.958741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.143 [2024-07-16 00:56:29.958762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-07-16 00:56:29.958772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.143 [2024-07-16 00:56:29.959036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.143 [2024-07-16 00:56:29.959307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.143 [2024-07-16 00:56:29.959320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.143 [2024-07-16 00:56:29.959329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.143 [2024-07-16 00:56:29.963572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.143 [2024-07-16 00:56:29.972856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.143 [2024-07-16 00:56:29.973445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.143 [2024-07-16 00:56:29.973468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.143 [2024-07-16 00:56:29.973478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.143 [2024-07-16 00:56:29.973743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.143 [2024-07-16 00:56:29.974009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.143 [2024-07-16 00:56:29.974021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.143 [2024-07-16 00:56:29.974031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.143 [2024-07-16 00:56:29.978283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.402 [2024-07-16 00:56:29.987558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.402 [2024-07-16 00:56:29.988045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.402 [2024-07-16 00:56:29.988066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.402 [2024-07-16 00:56:29.988077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.402 [2024-07-16 00:56:29.988348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.402 [2024-07-16 00:56:29.988615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.402 [2024-07-16 00:56:29.988627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.402 [2024-07-16 00:56:29.988636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.402 [2024-07-16 00:56:29.992882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.402 [2024-07-16 00:56:30.002318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.402 [2024-07-16 00:56:30.003105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.402 [2024-07-16 00:56:30.003137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.402 [2024-07-16 00:56:30.003154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.402 [2024-07-16 00:56:30.003487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.402 [2024-07-16 00:56:30.003805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.402 [2024-07-16 00:56:30.003824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.402 [2024-07-16 00:56:30.003837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.402 [2024-07-16 00:56:30.008667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.402 [2024-07-16 00:56:30.016960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.402 [2024-07-16 00:56:30.017435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.402 [2024-07-16 00:56:30.017459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.402 [2024-07-16 00:56:30.017470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.402 [2024-07-16 00:56:30.017736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.402 [2024-07-16 00:56:30.018003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.402 [2024-07-16 00:56:30.018015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.402 [2024-07-16 00:56:30.018025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.402 [2024-07-16 00:56:30.022274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.402 [2024-07-16 00:56:30.031551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.402 [2024-07-16 00:56:30.032065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.402 [2024-07-16 00:56:30.032087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.403 [2024-07-16 00:56:30.032099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.403 [2024-07-16 00:56:30.032371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.403 [2024-07-16 00:56:30.032638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.403 [2024-07-16 00:56:30.032651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.403 [2024-07-16 00:56:30.032660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.403 [2024-07-16 00:56:30.036910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.403 [2024-07-16 00:56:30.046996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.403 [2024-07-16 00:56:30.047572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.403 [2024-07-16 00:56:30.047599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.403 [2024-07-16 00:56:30.047612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.403 [2024-07-16 00:56:30.047879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.403 [2024-07-16 00:56:30.048146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.403 [2024-07-16 00:56:30.048158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.403 [2024-07-16 00:56:30.048173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.403 [2024-07-16 00:56:30.052430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.403 [2024-07-16 00:56:30.061709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.403 [2024-07-16 00:56:30.062260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.403 [2024-07-16 00:56:30.062284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.403 [2024-07-16 00:56:30.062295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.403 [2024-07-16 00:56:30.062560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.403 [2024-07-16 00:56:30.062825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.403 [2024-07-16 00:56:30.062838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.403 [2024-07-16 00:56:30.062847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.403 [2024-07-16 00:56:30.067096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.403 [2024-07-16 00:56:30.076399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.403 [2024-07-16 00:56:30.076860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.403 [2024-07-16 00:56:30.076882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.403 [2024-07-16 00:56:30.076893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.403 [2024-07-16 00:56:30.077156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.403 [2024-07-16 00:56:30.077428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.403 [2024-07-16 00:56:30.077442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.403 [2024-07-16 00:56:30.077452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.403 [2024-07-16 00:56:30.081692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.403 [2024-07-16 00:56:30.090959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.403 [2024-07-16 00:56:30.091529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.403 [2024-07-16 00:56:30.091551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.403 [2024-07-16 00:56:30.091562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.403 [2024-07-16 00:56:30.091826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.403 [2024-07-16 00:56:30.092091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.403 [2024-07-16 00:56:30.092103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.403 [2024-07-16 00:56:30.092113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.403 [2024-07-16 00:56:30.096362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.403 [2024-07-16 00:56:30.105631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.403 [2024-07-16 00:56:30.106117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.403 [2024-07-16 00:56:30.106168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.403 [2024-07-16 00:56:30.106190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.403 [2024-07-16 00:56:30.106788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.403 [2024-07-16 00:56:30.107326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.403 [2024-07-16 00:56:30.107339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.403 [2024-07-16 00:56:30.107349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.403 [2024-07-16 00:56:30.111590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.403 [2024-07-16 00:56:30.120344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.403 [2024-07-16 00:56:30.120907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.403 [2024-07-16 00:56:30.120930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.403 [2024-07-16 00:56:30.120941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.403 [2024-07-16 00:56:30.121205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.403 [2024-07-16 00:56:30.121478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.403 [2024-07-16 00:56:30.121491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.403 [2024-07-16 00:56:30.121501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.403 [2024-07-16 00:56:30.125739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.403 [2024-07-16 00:56:30.135002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.403 [2024-07-16 00:56:30.135546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.403 [2024-07-16 00:56:30.135568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.403 [2024-07-16 00:56:30.135578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.403 [2024-07-16 00:56:30.135842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.403 [2024-07-16 00:56:30.136108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.403 [2024-07-16 00:56:30.136120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.403 [2024-07-16 00:56:30.136130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.403 [2024-07-16 00:56:30.140365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.403 [2024-07-16 00:56:30.149660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.403 [2024-07-16 00:56:30.150225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.403 [2024-07-16 00:56:30.150247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.403 [2024-07-16 00:56:30.150264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.403 [2024-07-16 00:56:30.150530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.403 [2024-07-16 00:56:30.150799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.403 [2024-07-16 00:56:30.150811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.403 [2024-07-16 00:56:30.150822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.403 [2024-07-16 00:56:30.155064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.403 [2024-07-16 00:56:30.164325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.403 [2024-07-16 00:56:30.164894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.403 [2024-07-16 00:56:30.164916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.403 [2024-07-16 00:56:30.164926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.403 [2024-07-16 00:56:30.165189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.403 [2024-07-16 00:56:30.165462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.403 [2024-07-16 00:56:30.165475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.403 [2024-07-16 00:56:30.165484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.403 [2024-07-16 00:56:30.169727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.403 [2024-07-16 00:56:30.179005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.403 [2024-07-16 00:56:30.179573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.403 [2024-07-16 00:56:30.179615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.403 [2024-07-16 00:56:30.179637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.403 [2024-07-16 00:56:30.180195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.403 [2024-07-16 00:56:30.180466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.403 [2024-07-16 00:56:30.180480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.403 [2024-07-16 00:56:30.180489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.403 [2024-07-16 00:56:30.184729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.403 [2024-07-16 00:56:30.193738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.403 [2024-07-16 00:56:30.194333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.403 [2024-07-16 00:56:30.194377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.403 [2024-07-16 00:56:30.194399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.403 [2024-07-16 00:56:30.194976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.404 [2024-07-16 00:56:30.195320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.404 [2024-07-16 00:56:30.195333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.404 [2024-07-16 00:56:30.195344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.404 [2024-07-16 00:56:30.199589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.404 [2024-07-16 00:56:30.208357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.404 [2024-07-16 00:56:30.208928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.404 [2024-07-16 00:56:30.208970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.404 [2024-07-16 00:56:30.208992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.404 [2024-07-16 00:56:30.209540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.404 [2024-07-16 00:56:30.209806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.404 [2024-07-16 00:56:30.209818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.404 [2024-07-16 00:56:30.209828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.404 [2024-07-16 00:56:30.214324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.404 [2024-07-16 00:56:30.223096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.404 [2024-07-16 00:56:30.223642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.404 [2024-07-16 00:56:30.223665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.404 [2024-07-16 00:56:30.223676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.404 [2024-07-16 00:56:30.223941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.404 [2024-07-16 00:56:30.224207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.404 [2024-07-16 00:56:30.224220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.404 [2024-07-16 00:56:30.224229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.404 [2024-07-16 00:56:30.228478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.404 [2024-07-16 00:56:30.237744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.404 [2024-07-16 00:56:30.238363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.404 [2024-07-16 00:56:30.238386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.404 [2024-07-16 00:56:30.238396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.404 [2024-07-16 00:56:30.238660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.404 [2024-07-16 00:56:30.238926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.404 [2024-07-16 00:56:30.238938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.404 [2024-07-16 00:56:30.238947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.664 [2024-07-16 00:56:30.243193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.664 [2024-07-16 00:56:30.252469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.664 [2024-07-16 00:56:30.253008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.664 [2024-07-16 00:56:30.253030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.664 [2024-07-16 00:56:30.253046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.664 [2024-07-16 00:56:30.253316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.664 [2024-07-16 00:56:30.253582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.664 [2024-07-16 00:56:30.253595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.664 [2024-07-16 00:56:30.253605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.664 [2024-07-16 00:56:30.257840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.664 [2024-07-16 00:56:30.267103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.664 [2024-07-16 00:56:30.267560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.664 [2024-07-16 00:56:30.267582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.664 [2024-07-16 00:56:30.267593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.664 [2024-07-16 00:56:30.267856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.664 [2024-07-16 00:56:30.268121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.664 [2024-07-16 00:56:30.268134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.664 [2024-07-16 00:56:30.268143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.664 [2024-07-16 00:56:30.272403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.664 [2024-07-16 00:56:30.281678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.664 [2024-07-16 00:56:30.282160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.664 [2024-07-16 00:56:30.282202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.664 [2024-07-16 00:56:30.282224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.664 [2024-07-16 00:56:30.282725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.664 [2024-07-16 00:56:30.282991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.664 [2024-07-16 00:56:30.283003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.664 [2024-07-16 00:56:30.283013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.664 [2024-07-16 00:56:30.287263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.664 [2024-07-16 00:56:30.296282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.664 [2024-07-16 00:56:30.296753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.664 [2024-07-16 00:56:30.296796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.664 [2024-07-16 00:56:30.296818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.664 [2024-07-16 00:56:30.297412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.664 [2024-07-16 00:56:30.297705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.664 [2024-07-16 00:56:30.297721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.664 [2024-07-16 00:56:30.297731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.664 [2024-07-16 00:56:30.301966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.664 [2024-07-16 00:56:30.310983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.664 [2024-07-16 00:56:30.311559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.664 [2024-07-16 00:56:30.311603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.664 [2024-07-16 00:56:30.311625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.664 [2024-07-16 00:56:30.312202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.664 [2024-07-16 00:56:30.312744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.664 [2024-07-16 00:56:30.312758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.664 [2024-07-16 00:56:30.312768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.664 [2024-07-16 00:56:30.317015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.664 [2024-07-16 00:56:30.325529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.664 [2024-07-16 00:56:30.326016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.664 [2024-07-16 00:56:30.326038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.664 [2024-07-16 00:56:30.326048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.664 [2024-07-16 00:56:30.326319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.664 [2024-07-16 00:56:30.326585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.664 [2024-07-16 00:56:30.326598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.664 [2024-07-16 00:56:30.326608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.664 [2024-07-16 00:56:30.330846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.664 [2024-07-16 00:56:30.340128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.664 [2024-07-16 00:56:30.340674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.664 [2024-07-16 00:56:30.340696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.664 [2024-07-16 00:56:30.340707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.664 [2024-07-16 00:56:30.340972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.664 [2024-07-16 00:56:30.341237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.664 [2024-07-16 00:56:30.341250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.664 [2024-07-16 00:56:30.341268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.664 [2024-07-16 00:56:30.345514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.664 [2024-07-16 00:56:30.354809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.664 [2024-07-16 00:56:30.355313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.664 [2024-07-16 00:56:30.355336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.664 [2024-07-16 00:56:30.355347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.664 [2024-07-16 00:56:30.355614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.664 [2024-07-16 00:56:30.355884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.664 [2024-07-16 00:56:30.355898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.664 [2024-07-16 00:56:30.355913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.664 [2024-07-16 00:56:30.360230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.664 [2024-07-16 00:56:30.369540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.664 [2024-07-16 00:56:30.370017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.664 [2024-07-16 00:56:30.370040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.664 [2024-07-16 00:56:30.370050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.664 [2024-07-16 00:56:30.370333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.664 [2024-07-16 00:56:30.370599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.664 [2024-07-16 00:56:30.370612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.664 [2024-07-16 00:56:30.370622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.664 [2024-07-16 00:56:30.374865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.664 [2024-07-16 00:56:30.384140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.664 [2024-07-16 00:56:30.384714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.664 [2024-07-16 00:56:30.384737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.664 [2024-07-16 00:56:30.384747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.664 [2024-07-16 00:56:30.385011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.665 [2024-07-16 00:56:30.385284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.665 [2024-07-16 00:56:30.385297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.665 [2024-07-16 00:56:30.385307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.665 [2024-07-16 00:56:30.389544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.665 [2024-07-16 00:56:30.398810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.665 [2024-07-16 00:56:30.399358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.665 [2024-07-16 00:56:30.399381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.665 [2024-07-16 00:56:30.399392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.665 [2024-07-16 00:56:30.399661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.665 [2024-07-16 00:56:30.399926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.665 [2024-07-16 00:56:30.399939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.665 [2024-07-16 00:56:30.399949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.665 [2024-07-16 00:56:30.404210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.665 [2024-07-16 00:56:30.413494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.665 [2024-07-16 00:56:30.414058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.665 [2024-07-16 00:56:30.414081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.665 [2024-07-16 00:56:30.414092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.665 [2024-07-16 00:56:30.414365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.665 [2024-07-16 00:56:30.414631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.665 [2024-07-16 00:56:30.414644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.665 [2024-07-16 00:56:30.414653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.665 [2024-07-16 00:56:30.418900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.665 [2024-07-16 00:56:30.428165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.665 [2024-07-16 00:56:30.428734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.665 [2024-07-16 00:56:30.428756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.665 [2024-07-16 00:56:30.428767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.665 [2024-07-16 00:56:30.429031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.665 [2024-07-16 00:56:30.429307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.665 [2024-07-16 00:56:30.429321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.665 [2024-07-16 00:56:30.429330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.665 [2024-07-16 00:56:30.433583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.665 [2024-07-16 00:56:30.442848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.665 [2024-07-16 00:56:30.443411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.665 [2024-07-16 00:56:30.443432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.665 [2024-07-16 00:56:30.443442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.665 [2024-07-16 00:56:30.443705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.665 [2024-07-16 00:56:30.443969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.665 [2024-07-16 00:56:30.443982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.665 [2024-07-16 00:56:30.443995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.665 [2024-07-16 00:56:30.448237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.665 [2024-07-16 00:56:30.457507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.665 [2024-07-16 00:56:30.458057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.665 [2024-07-16 00:56:30.458078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.665 [2024-07-16 00:56:30.458089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.665 [2024-07-16 00:56:30.458360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.665 [2024-07-16 00:56:30.458625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.665 [2024-07-16 00:56:30.458637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.665 [2024-07-16 00:56:30.458647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.665 [2024-07-16 00:56:30.462897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.665 [2024-07-16 00:56:30.472174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.665 [2024-07-16 00:56:30.472720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.665 [2024-07-16 00:56:30.472742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.665 [2024-07-16 00:56:30.472753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.665 [2024-07-16 00:56:30.473016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.665 [2024-07-16 00:56:30.473289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.665 [2024-07-16 00:56:30.473302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.665 [2024-07-16 00:56:30.473313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.665 [2024-07-16 00:56:30.477548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.665 [2024-07-16 00:56:30.486813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.665 [2024-07-16 00:56:30.487386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.665 [2024-07-16 00:56:30.487430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.665 [2024-07-16 00:56:30.487451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.665 [2024-07-16 00:56:30.488028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.665 [2024-07-16 00:56:30.488322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.665 [2024-07-16 00:56:30.488336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.665 [2024-07-16 00:56:30.488345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.665 [2024-07-16 00:56:30.494346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.926 [2024-07-16 00:56:30.502088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.926 [2024-07-16 00:56:30.502669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.926 [2024-07-16 00:56:30.502719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.926 [2024-07-16 00:56:30.502741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.926 [2024-07-16 00:56:30.503334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.926 [2024-07-16 00:56:30.503623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.926 [2024-07-16 00:56:30.503636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.926 [2024-07-16 00:56:30.503645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.926 [2024-07-16 00:56:30.507890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.926 [2024-07-16 00:56:30.516660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.926 [2024-07-16 00:56:30.517205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.926 [2024-07-16 00:56:30.517248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.926 [2024-07-16 00:56:30.517285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.926 [2024-07-16 00:56:30.517865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.926 [2024-07-16 00:56:30.518316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.926 [2024-07-16 00:56:30.518330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.926 [2024-07-16 00:56:30.518341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.926 [2024-07-16 00:56:30.522580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.926 [2024-07-16 00:56:30.531378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.926 [2024-07-16 00:56:30.531924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.926 [2024-07-16 00:56:30.531966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.926 [2024-07-16 00:56:30.531988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.926 [2024-07-16 00:56:30.532510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.926 [2024-07-16 00:56:30.532778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.926 [2024-07-16 00:56:30.532790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.926 [2024-07-16 00:56:30.532800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.926 [2024-07-16 00:56:30.537037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.926 [2024-07-16 00:56:30.546049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.926 [2024-07-16 00:56:30.546625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.926 [2024-07-16 00:56:30.546668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.926 [2024-07-16 00:56:30.546690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.926 [2024-07-16 00:56:30.547149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.926 [2024-07-16 00:56:30.547427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.926 [2024-07-16 00:56:30.547441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.926 [2024-07-16 00:56:30.547450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.926 [2024-07-16 00:56:30.551691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.926 [2024-07-16 00:56:30.560718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.926 [2024-07-16 00:56:30.561286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.926 [2024-07-16 00:56:30.561309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.926 [2024-07-16 00:56:30.561320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.926 [2024-07-16 00:56:30.561583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.926 [2024-07-16 00:56:30.561848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.926 [2024-07-16 00:56:30.561860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.926 [2024-07-16 00:56:30.561870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.926 [2024-07-16 00:56:30.566115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.926 [2024-07-16 00:56:30.575415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.926 [2024-07-16 00:56:30.575982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.926 [2024-07-16 00:56:30.576026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.926 [2024-07-16 00:56:30.576049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.926 [2024-07-16 00:56:30.576638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.926 [2024-07-16 00:56:30.577220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.926 [2024-07-16 00:56:30.577245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.926 [2024-07-16 00:56:30.577276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.926 [2024-07-16 00:56:30.581569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.926 [2024-07-16 00:56:30.590080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.926 [2024-07-16 00:56:30.590652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.926 [2024-07-16 00:56:30.590695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.926 [2024-07-16 00:56:30.590717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.926 [2024-07-16 00:56:30.591238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.926 [2024-07-16 00:56:30.591511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.926 [2024-07-16 00:56:30.591525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.926 [2024-07-16 00:56:30.591535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.926 [2024-07-16 00:56:30.595784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.926 [2024-07-16 00:56:30.604824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.926 [2024-07-16 00:56:30.605396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.926 [2024-07-16 00:56:30.605438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.926 [2024-07-16 00:56:30.605459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.926 [2024-07-16 00:56:30.606036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.926 [2024-07-16 00:56:30.606409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.926 [2024-07-16 00:56:30.606423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.926 [2024-07-16 00:56:30.606433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.926 [2024-07-16 00:56:30.610675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.926 [2024-07-16 00:56:30.619444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.926 [2024-07-16 00:56:30.620011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.926 [2024-07-16 00:56:30.620033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.926 [2024-07-16 00:56:30.620043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.926 [2024-07-16 00:56:30.620314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.926 [2024-07-16 00:56:30.620579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.926 [2024-07-16 00:56:30.620591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.926 [2024-07-16 00:56:30.620600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.926 [2024-07-16 00:56:30.624837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.926 [2024-07-16 00:56:30.634103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.926 [2024-07-16 00:56:30.634593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.926 [2024-07-16 00:56:30.634615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.926 [2024-07-16 00:56:30.634625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.926 [2024-07-16 00:56:30.634889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.926 [2024-07-16 00:56:30.635155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.926 [2024-07-16 00:56:30.635167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.927 [2024-07-16 00:56:30.635176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.927 [2024-07-16 00:56:30.639420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.927 [2024-07-16 00:56:30.648675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.927 [2024-07-16 00:56:30.649236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.927 [2024-07-16 00:56:30.649296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.927 [2024-07-16 00:56:30.649326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.927 [2024-07-16 00:56:30.649905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.927 [2024-07-16 00:56:30.650496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.927 [2024-07-16 00:56:30.650534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.927 [2024-07-16 00:56:30.650547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.927 [2024-07-16 00:56:30.656384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.927 [2024-07-16 00:56:30.663768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.927 [2024-07-16 00:56:30.664336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.927 [2024-07-16 00:56:30.664378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.927 [2024-07-16 00:56:30.664399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.927 [2024-07-16 00:56:30.664977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.927 [2024-07-16 00:56:30.665572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.927 [2024-07-16 00:56:30.665585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.927 [2024-07-16 00:56:30.665594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.927 [2024-07-16 00:56:30.669836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.927 [2024-07-16 00:56:30.678354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.927 [2024-07-16 00:56:30.678920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.927 [2024-07-16 00:56:30.678942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.927 [2024-07-16 00:56:30.678952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.927 [2024-07-16 00:56:30.679216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.927 [2024-07-16 00:56:30.679490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.927 [2024-07-16 00:56:30.679503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.927 [2024-07-16 00:56:30.679513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.927 [2024-07-16 00:56:30.683755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.927 [2024-07-16 00:56:30.693023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.927 [2024-07-16 00:56:30.693603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.927 [2024-07-16 00:56:30.693646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.927 [2024-07-16 00:56:30.693667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.927 [2024-07-16 00:56:30.694244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.927 [2024-07-16 00:56:30.694801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.927 [2024-07-16 00:56:30.694818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.927 [2024-07-16 00:56:30.694827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.927 [2024-07-16 00:56:30.699072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.927 [2024-07-16 00:56:30.707581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.927 [2024-07-16 00:56:30.708137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.927 [2024-07-16 00:56:30.708179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.927 [2024-07-16 00:56:30.708199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.927 [2024-07-16 00:56:30.708730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.927 [2024-07-16 00:56:30.708996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.927 [2024-07-16 00:56:30.709009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.927 [2024-07-16 00:56:30.709018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.927 [2024-07-16 00:56:30.713252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.927 [2024-07-16 00:56:30.722323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.927 [2024-07-16 00:56:30.722884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.927 [2024-07-16 00:56:30.722906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.927 [2024-07-16 00:56:30.722916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.927 [2024-07-16 00:56:30.723181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.927 [2024-07-16 00:56:30.723453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.927 [2024-07-16 00:56:30.723466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.927 [2024-07-16 00:56:30.723476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.927 [2024-07-16 00:56:30.727716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:12.927 [2024-07-16 00:56:30.736977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.927 [2024-07-16 00:56:30.737498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.927 [2024-07-16 00:56:30.737520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.927 [2024-07-16 00:56:30.737531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.927 [2024-07-16 00:56:30.737793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.927 [2024-07-16 00:56:30.738057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.927 [2024-07-16 00:56:30.738069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.927 [2024-07-16 00:56:30.738078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.927 [2024-07-16 00:56:30.742327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:12.927 [2024-07-16 00:56:30.751599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:12.927 [2024-07-16 00:56:30.752173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.927 [2024-07-16 00:56:30.752195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:12.927 [2024-07-16 00:56:30.752206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:12.927 [2024-07-16 00:56:30.752478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:12.927 [2024-07-16 00:56:30.752744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:12.927 [2024-07-16 00:56:30.752757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:12.927 [2024-07-16 00:56:30.752767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:12.927 [2024-07-16 00:56:30.757011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.188 [2024-07-16 00:56:30.766301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.188 [2024-07-16 00:56:30.766781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.188 [2024-07-16 00:56:30.766823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.188 [2024-07-16 00:56:30.766845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.188 [2024-07-16 00:56:30.767438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.188 [2024-07-16 00:56:30.767940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.188 [2024-07-16 00:56:30.767953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.188 [2024-07-16 00:56:30.767963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.188 [2024-07-16 00:56:30.772221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.188 [2024-07-16 00:56:30.781133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.188 [2024-07-16 00:56:30.781712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.188 [2024-07-16 00:56:30.781734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.188 [2024-07-16 00:56:30.781744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.188 [2024-07-16 00:56:30.782007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.188 [2024-07-16 00:56:30.782277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.188 [2024-07-16 00:56:30.782291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.188 [2024-07-16 00:56:30.782301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.188 [2024-07-16 00:56:30.786543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.188 [2024-07-16 00:56:30.795809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.188 [2024-07-16 00:56:30.796372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.188 [2024-07-16 00:56:30.796394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.188 [2024-07-16 00:56:30.796405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.188 [2024-07-16 00:56:30.796674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.188 [2024-07-16 00:56:30.796940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.188 [2024-07-16 00:56:30.796953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.188 [2024-07-16 00:56:30.796962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.188 [2024-07-16 00:56:30.801207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.188 [2024-07-16 00:56:30.810463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.188 [2024-07-16 00:56:30.811040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.188 [2024-07-16 00:56:30.811082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.188 [2024-07-16 00:56:30.811103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.188 [2024-07-16 00:56:30.811693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.188 [2024-07-16 00:56:30.811960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.188 [2024-07-16 00:56:30.811973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.188 [2024-07-16 00:56:30.811982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.188 [2024-07-16 00:56:30.817833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.188 [2024-07-16 00:56:30.825605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.188 [2024-07-16 00:56:30.826180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.188 [2024-07-16 00:56:30.826223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.188 [2024-07-16 00:56:30.826244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.188 [2024-07-16 00:56:30.826774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.188 [2024-07-16 00:56:30.827041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.188 [2024-07-16 00:56:30.827053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.188 [2024-07-16 00:56:30.827063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.188 [2024-07-16 00:56:30.831306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.188 [2024-07-16 00:56:30.840313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.188 [2024-07-16 00:56:30.840882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.188 [2024-07-16 00:56:30.840903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.188 [2024-07-16 00:56:30.840914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.188 [2024-07-16 00:56:30.841177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.188 [2024-07-16 00:56:30.841449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.188 [2024-07-16 00:56:30.841462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.188 [2024-07-16 00:56:30.841476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.188 [2024-07-16 00:56:30.845713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.188 [2024-07-16 00:56:30.854971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.188 [2024-07-16 00:56:30.855512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.188 [2024-07-16 00:56:30.855534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.188 [2024-07-16 00:56:30.855544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.188 [2024-07-16 00:56:30.855809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.188 [2024-07-16 00:56:30.856074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.188 [2024-07-16 00:56:30.856086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.188 [2024-07-16 00:56:30.856096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.188 [2024-07-16 00:56:30.860353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.188 [2024-07-16 00:56:30.869620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.188 [2024-07-16 00:56:30.870184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.189 [2024-07-16 00:56:30.870227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.189 [2024-07-16 00:56:30.870248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.189 [2024-07-16 00:56:30.870850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.189 [2024-07-16 00:56:30.871139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.189 [2024-07-16 00:56:30.871152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.189 [2024-07-16 00:56:30.871162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.189 [2024-07-16 00:56:30.875412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.189 [2024-07-16 00:56:30.884162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.189 [2024-07-16 00:56:30.884659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.189 [2024-07-16 00:56:30.884680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.189 [2024-07-16 00:56:30.884691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.189 [2024-07-16 00:56:30.884955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.189 [2024-07-16 00:56:30.885222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.189 [2024-07-16 00:56:30.885234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.189 [2024-07-16 00:56:30.885244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.189 [2024-07-16 00:56:30.889490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.189 [2024-07-16 00:56:30.898759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.189 [2024-07-16 00:56:30.899335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.189 [2024-07-16 00:56:30.899386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.189 [2024-07-16 00:56:30.899407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.189 [2024-07-16 00:56:30.899975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.189 [2024-07-16 00:56:30.900240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.189 [2024-07-16 00:56:30.900252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.189 [2024-07-16 00:56:30.900270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.189 [2024-07-16 00:56:30.904507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.189 [2024-07-16 00:56:30.913516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.189 [2024-07-16 00:56:30.914079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.189 [2024-07-16 00:56:30.914099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.189 [2024-07-16 00:56:30.914110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.189 [2024-07-16 00:56:30.914383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.189 [2024-07-16 00:56:30.914650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.189 [2024-07-16 00:56:30.914663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.189 [2024-07-16 00:56:30.914672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.189 [2024-07-16 00:56:30.918912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.189 [2024-07-16 00:56:30.928175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.189 [2024-07-16 00:56:30.928750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.189 [2024-07-16 00:56:30.928771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.189 [2024-07-16 00:56:30.928781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.189 [2024-07-16 00:56:30.929044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.189 [2024-07-16 00:56:30.929318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.189 [2024-07-16 00:56:30.929331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.189 [2024-07-16 00:56:30.929341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.189 [2024-07-16 00:56:30.933583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.189 [2024-07-16 00:56:30.942857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.189 [2024-07-16 00:56:30.943407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.189 [2024-07-16 00:56:30.943429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.189 [2024-07-16 00:56:30.943439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.189 [2024-07-16 00:56:30.943702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.189 [2024-07-16 00:56:30.943971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.189 [2024-07-16 00:56:30.943984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.189 [2024-07-16 00:56:30.943993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.189 [2024-07-16 00:56:30.948238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.189 [2024-07-16 00:56:30.957503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.189 [2024-07-16 00:56:30.958062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.189 [2024-07-16 00:56:30.958083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.189 [2024-07-16 00:56:30.958093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.189 [2024-07-16 00:56:30.958364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.189 [2024-07-16 00:56:30.958630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.189 [2024-07-16 00:56:30.958642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.189 [2024-07-16 00:56:30.958651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.189 [2024-07-16 00:56:30.962894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.189 [2024-07-16 00:56:30.972164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.189 [2024-07-16 00:56:30.972725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.189 [2024-07-16 00:56:30.972767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.189 [2024-07-16 00:56:30.972789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.189 [2024-07-16 00:56:30.973345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.189 [2024-07-16 00:56:30.973612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.189 [2024-07-16 00:56:30.973625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.189 [2024-07-16 00:56:30.973635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.189 [2024-07-16 00:56:30.977882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.189 [2024-07-16 00:56:30.986890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.189 [2024-07-16 00:56:30.987460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.189 [2024-07-16 00:56:30.987502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.189 [2024-07-16 00:56:30.987523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.189 [2024-07-16 00:56:30.988105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.189 [2024-07-16 00:56:30.988380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.189 [2024-07-16 00:56:30.988393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.189 [2024-07-16 00:56:30.988403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.189 [2024-07-16 00:56:30.992654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.189 [2024-07-16 00:56:31.001660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.189 [2024-07-16 00:56:31.002154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.189 [2024-07-16 00:56:31.002197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.189 [2024-07-16 00:56:31.002218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.189 [2024-07-16 00:56:31.002810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.189 [2024-07-16 00:56:31.003402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.189 [2024-07-16 00:56:31.003428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.189 [2024-07-16 00:56:31.003448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.189 [2024-07-16 00:56:31.007712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.189 [2024-07-16 00:56:31.016234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.189 [2024-07-16 00:56:31.016798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.189 [2024-07-16 00:56:31.016820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.189 [2024-07-16 00:56:31.016830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.189 [2024-07-16 00:56:31.017094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.189 [2024-07-16 00:56:31.017364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.189 [2024-07-16 00:56:31.017377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.189 [2024-07-16 00:56:31.017387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.189 [2024-07-16 00:56:31.021629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.450 [2024-07-16 00:56:31.030901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.450 [2024-07-16 00:56:31.031471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.450 [2024-07-16 00:56:31.031515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.450 [2024-07-16 00:56:31.031538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.450 [2024-07-16 00:56:31.032116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.450 [2024-07-16 00:56:31.032707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.450 [2024-07-16 00:56:31.032733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.450 [2024-07-16 00:56:31.032761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.450 [2024-07-16 00:56:31.037009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.450 [2024-07-16 00:56:31.045528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.450 [2024-07-16 00:56:31.046004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.450 [2024-07-16 00:56:31.046025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.450 [2024-07-16 00:56:31.046040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.450 [2024-07-16 00:56:31.046311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.450 [2024-07-16 00:56:31.046578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.450 [2024-07-16 00:56:31.046591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.450 [2024-07-16 00:56:31.046600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.450 [2024-07-16 00:56:31.050839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.450 [2024-07-16 00:56:31.060107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.450 [2024-07-16 00:56:31.060690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.450 [2024-07-16 00:56:31.060732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.450 [2024-07-16 00:56:31.060754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.450 [2024-07-16 00:56:31.061324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.450 [2024-07-16 00:56:31.061592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.450 [2024-07-16 00:56:31.061604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.450 [2024-07-16 00:56:31.061613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.450 [2024-07-16 00:56:31.065849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.450 [2024-07-16 00:56:31.074868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.450 [2024-07-16 00:56:31.075434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.450 [2024-07-16 00:56:31.075456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.450 [2024-07-16 00:56:31.075467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.450 [2024-07-16 00:56:31.075731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.450 [2024-07-16 00:56:31.075995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.450 [2024-07-16 00:56:31.076008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.450 [2024-07-16 00:56:31.076017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.450 [2024-07-16 00:56:31.080269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.450 [2024-07-16 00:56:31.089541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.450 [2024-07-16 00:56:31.090080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.450 [2024-07-16 00:56:31.090102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.450 [2024-07-16 00:56:31.090111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.450 [2024-07-16 00:56:31.090382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.450 [2024-07-16 00:56:31.090649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.450 [2024-07-16 00:56:31.090665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.450 [2024-07-16 00:56:31.090675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.450 [2024-07-16 00:56:31.094924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.450 [2024-07-16 00:56:31.104194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.450 [2024-07-16 00:56:31.104763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.450 [2024-07-16 00:56:31.104786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.450 [2024-07-16 00:56:31.104796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.450 [2024-07-16 00:56:31.105059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.450 [2024-07-16 00:56:31.105330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.451 [2024-07-16 00:56:31.105344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.451 [2024-07-16 00:56:31.105353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.451 [2024-07-16 00:56:31.109597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.451 [2024-07-16 00:56:31.118872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.451 [2024-07-16 00:56:31.119438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.451 [2024-07-16 00:56:31.119460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.451 [2024-07-16 00:56:31.119470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.451 [2024-07-16 00:56:31.119734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.451 [2024-07-16 00:56:31.119999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.451 [2024-07-16 00:56:31.120011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.451 [2024-07-16 00:56:31.120021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.451 [2024-07-16 00:56:31.124272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.451 [2024-07-16 00:56:31.133541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.451 [2024-07-16 00:56:31.134021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.451 [2024-07-16 00:56:31.134042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.451 [2024-07-16 00:56:31.134053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.451 [2024-07-16 00:56:31.134326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.451 [2024-07-16 00:56:31.134592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.451 [2024-07-16 00:56:31.134605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.451 [2024-07-16 00:56:31.134615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.451 [2024-07-16 00:56:31.138859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.451 [2024-07-16 00:56:31.148163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.451 [2024-07-16 00:56:31.148737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.451 [2024-07-16 00:56:31.148758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.451 [2024-07-16 00:56:31.148768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.451 [2024-07-16 00:56:31.149032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.451 [2024-07-16 00:56:31.149303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.451 [2024-07-16 00:56:31.149316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.451 [2024-07-16 00:56:31.149326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.451 [2024-07-16 00:56:31.153570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.451 [2024-07-16 00:56:31.162845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.451 [2024-07-16 00:56:31.163411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.451 [2024-07-16 00:56:31.163434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.451 [2024-07-16 00:56:31.163444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.451 [2024-07-16 00:56:31.163708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.451 [2024-07-16 00:56:31.163972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.451 [2024-07-16 00:56:31.163985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.451 [2024-07-16 00:56:31.163994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.451 [2024-07-16 00:56:31.168240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.451 [2024-07-16 00:56:31.177519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.451 [2024-07-16 00:56:31.178000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.451 [2024-07-16 00:56:31.178021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.451 [2024-07-16 00:56:31.178032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.451 [2024-07-16 00:56:31.178305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.451 [2024-07-16 00:56:31.178571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.451 [2024-07-16 00:56:31.178584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.451 [2024-07-16 00:56:31.178593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.451 [2024-07-16 00:56:31.182835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.451 [2024-07-16 00:56:31.192109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.451 [2024-07-16 00:56:31.192684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.451 [2024-07-16 00:56:31.192705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.451 [2024-07-16 00:56:31.192715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.451 [2024-07-16 00:56:31.192982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.451 [2024-07-16 00:56:31.193247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.451 [2024-07-16 00:56:31.193268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.451 [2024-07-16 00:56:31.193277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.451 [2024-07-16 00:56:31.197524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.451 [2024-07-16 00:56:31.206792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.451 [2024-07-16 00:56:31.207327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.451 [2024-07-16 00:56:31.207349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.451 [2024-07-16 00:56:31.207360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.451 [2024-07-16 00:56:31.207623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.451 [2024-07-16 00:56:31.207888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.451 [2024-07-16 00:56:31.207900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.451 [2024-07-16 00:56:31.207909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.451 [2024-07-16 00:56:31.212388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.451 [2024-07-16 00:56:31.221417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.451 [2024-07-16 00:56:31.221988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.451 [2024-07-16 00:56:31.222011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.451 [2024-07-16 00:56:31.222021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.451 [2024-07-16 00:56:31.222292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.451 [2024-07-16 00:56:31.222557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.451 [2024-07-16 00:56:31.222570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.451 [2024-07-16 00:56:31.222579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.451 [2024-07-16 00:56:31.226826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.451 [2024-07-16 00:56:31.236100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.451 [2024-07-16 00:56:31.236592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.451 [2024-07-16 00:56:31.236615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.451 [2024-07-16 00:56:31.236626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.451 [2024-07-16 00:56:31.236890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.451 [2024-07-16 00:56:31.237155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.451 [2024-07-16 00:56:31.237168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.451 [2024-07-16 00:56:31.237182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.451 [2024-07-16 00:56:31.241440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.451 [2024-07-16 00:56:31.250731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.451 [2024-07-16 00:56:31.251218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.451 [2024-07-16 00:56:31.251240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.451 [2024-07-16 00:56:31.251251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.451 [2024-07-16 00:56:31.251524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.451 [2024-07-16 00:56:31.251789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.451 [2024-07-16 00:56:31.251802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.451 [2024-07-16 00:56:31.251812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.451 [2024-07-16 00:56:31.256064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.451 [2024-07-16 00:56:31.265365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.451 [2024-07-16 00:56:31.265855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.451 [2024-07-16 00:56:31.265878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.451 [2024-07-16 00:56:31.265888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.451 [2024-07-16 00:56:31.266152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.452 [2024-07-16 00:56:31.266426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.452 [2024-07-16 00:56:31.266439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.452 [2024-07-16 00:56:31.266448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.452 [2024-07-16 00:56:31.270711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.452 [2024-07-16 00:56:31.280008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.452 [2024-07-16 00:56:31.280442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.452 [2024-07-16 00:56:31.280464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.452 [2024-07-16 00:56:31.280475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.452 [2024-07-16 00:56:31.280738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.452 [2024-07-16 00:56:31.281005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.452 [2024-07-16 00:56:31.281018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.452 [2024-07-16 00:56:31.281028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.452 [2024-07-16 00:56:31.285290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.711 [2024-07-16 00:56:31.294581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.711 [2024-07-16 00:56:31.295054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.711 [2024-07-16 00:56:31.295080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.712 [2024-07-16 00:56:31.295091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.712 [2024-07-16 00:56:31.295364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.712 [2024-07-16 00:56:31.295631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.712 [2024-07-16 00:56:31.295644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.712 [2024-07-16 00:56:31.295653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.712 [2024-07-16 00:56:31.299902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.712 [2024-07-16 00:56:31.309192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.712 [2024-07-16 00:56:31.309744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.712 [2024-07-16 00:56:31.309787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.712 [2024-07-16 00:56:31.309808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.712 [2024-07-16 00:56:31.310327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.712 [2024-07-16 00:56:31.310593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.712 [2024-07-16 00:56:31.310606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.712 [2024-07-16 00:56:31.310616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.712 [2024-07-16 00:56:31.314871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.712 [2024-07-16 00:56:31.323910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.712 [2024-07-16 00:56:31.324494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.712 [2024-07-16 00:56:31.324536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.712 [2024-07-16 00:56:31.324559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.712 [2024-07-16 00:56:31.325088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.712 [2024-07-16 00:56:31.325361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.712 [2024-07-16 00:56:31.325375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.712 [2024-07-16 00:56:31.325385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.712 [2024-07-16 00:56:31.329630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.712 [2024-07-16 00:56:31.338649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.712 [2024-07-16 00:56:31.339230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.712 [2024-07-16 00:56:31.339285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.712 [2024-07-16 00:56:31.339308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.712 [2024-07-16 00:56:31.339887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.712 [2024-07-16 00:56:31.340233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.712 [2024-07-16 00:56:31.340251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.712 [2024-07-16 00:56:31.340273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.712 [2024-07-16 00:56:31.346505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.712 [2024-07-16 00:56:31.353723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.712 [2024-07-16 00:56:31.354296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.712 [2024-07-16 00:56:31.354339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.712 [2024-07-16 00:56:31.354361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.712 [2024-07-16 00:56:31.354937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.712 [2024-07-16 00:56:31.355236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.712 [2024-07-16 00:56:31.355249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.712 [2024-07-16 00:56:31.355267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.712 [2024-07-16 00:56:31.359521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.712 [2024-07-16 00:56:31.368322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.712 [2024-07-16 00:56:31.368750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.712 [2024-07-16 00:56:31.368792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.712 [2024-07-16 00:56:31.368814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.712 [2024-07-16 00:56:31.369376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.712 [2024-07-16 00:56:31.369643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.712 [2024-07-16 00:56:31.369656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.712 [2024-07-16 00:56:31.369665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.712 [2024-07-16 00:56:31.373928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.712 [2024-07-16 00:56:31.382971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.712 [2024-07-16 00:56:31.383421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.712 [2024-07-16 00:56:31.383463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.712 [2024-07-16 00:56:31.383485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.712 [2024-07-16 00:56:31.384062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.712 [2024-07-16 00:56:31.384369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.712 [2024-07-16 00:56:31.384383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.712 [2024-07-16 00:56:31.384393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.712 [2024-07-16 00:56:31.388643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.712 [2024-07-16 00:56:31.397669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.712 [2024-07-16 00:56:31.398206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.712 [2024-07-16 00:56:31.398228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.712 [2024-07-16 00:56:31.398238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.712 [2024-07-16 00:56:31.398510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.712 [2024-07-16 00:56:31.398777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.712 [2024-07-16 00:56:31.398789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.712 [2024-07-16 00:56:31.398799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.712 [2024-07-16 00:56:31.403044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.712 [2024-07-16 00:56:31.412317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.712 [2024-07-16 00:56:31.412727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.712 [2024-07-16 00:56:31.412749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.712 [2024-07-16 00:56:31.412759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.712 [2024-07-16 00:56:31.413023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.712 [2024-07-16 00:56:31.413294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.712 [2024-07-16 00:56:31.413308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.712 [2024-07-16 00:56:31.413318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.712 [2024-07-16 00:56:31.417569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.712 [2024-07-16 00:56:31.427103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.712 [2024-07-16 00:56:31.427673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.712 [2024-07-16 00:56:31.427696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.712 [2024-07-16 00:56:31.427708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.712 [2024-07-16 00:56:31.427972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.712 [2024-07-16 00:56:31.428238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.712 [2024-07-16 00:56:31.428250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.712 [2024-07-16 00:56:31.428267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.712 [2024-07-16 00:56:31.432512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.712 [2024-07-16 00:56:31.441781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.712 [2024-07-16 00:56:31.442323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.713 [2024-07-16 00:56:31.442346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.713 [2024-07-16 00:56:31.442361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.713 [2024-07-16 00:56:31.442625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.713 [2024-07-16 00:56:31.442890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.713 [2024-07-16 00:56:31.442902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.713 [2024-07-16 00:56:31.442912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.713 [2024-07-16 00:56:31.447158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.713 [2024-07-16 00:56:31.456450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.713 [2024-07-16 00:56:31.457019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.713 [2024-07-16 00:56:31.457040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.713 [2024-07-16 00:56:31.457050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.713 [2024-07-16 00:56:31.457321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.713 [2024-07-16 00:56:31.457587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.713 [2024-07-16 00:56:31.457599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.713 [2024-07-16 00:56:31.457609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.713 [2024-07-16 00:56:31.461858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.713 [2024-07-16 00:56:31.471159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.713 [2024-07-16 00:56:31.471703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.713 [2024-07-16 00:56:31.471725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.713 [2024-07-16 00:56:31.471736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.713 [2024-07-16 00:56:31.472000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.713 [2024-07-16 00:56:31.472272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.713 [2024-07-16 00:56:31.472285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.713 [2024-07-16 00:56:31.472295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.713 [2024-07-16 00:56:31.476540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.713 [2024-07-16 00:56:31.485815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.713 [2024-07-16 00:56:31.486295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.713 [2024-07-16 00:56:31.486318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.713 [2024-07-16 00:56:31.486328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.713 [2024-07-16 00:56:31.486592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.713 [2024-07-16 00:56:31.486858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.713 [2024-07-16 00:56:31.486877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.713 [2024-07-16 00:56:31.486888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.713 [2024-07-16 00:56:31.491136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.713 [2024-07-16 00:56:31.500409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.713 [2024-07-16 00:56:31.500968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.713 [2024-07-16 00:56:31.500989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.713 [2024-07-16 00:56:31.501000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.713 [2024-07-16 00:56:31.501271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.713 [2024-07-16 00:56:31.501539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.713 [2024-07-16 00:56:31.501551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.713 [2024-07-16 00:56:31.501561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.713 [2024-07-16 00:56:31.505801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.713 [2024-07-16 00:56:31.515079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.713 [2024-07-16 00:56:31.515649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.713 [2024-07-16 00:56:31.515671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.713 [2024-07-16 00:56:31.515682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.713 [2024-07-16 00:56:31.515945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.713 [2024-07-16 00:56:31.516211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.713 [2024-07-16 00:56:31.516223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.713 [2024-07-16 00:56:31.516233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.713 [2024-07-16 00:56:31.520486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.713 [2024-07-16 00:56:31.529759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.713 [2024-07-16 00:56:31.530295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.713 [2024-07-16 00:56:31.530317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.713 [2024-07-16 00:56:31.530329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.713 [2024-07-16 00:56:31.530594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.713 [2024-07-16 00:56:31.530859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.713 [2024-07-16 00:56:31.530871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.713 [2024-07-16 00:56:31.530881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.713 [2024-07-16 00:56:31.535130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.713 [2024-07-16 00:56:31.544400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.713 [2024-07-16 00:56:31.544920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.713 [2024-07-16 00:56:31.544942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.713 [2024-07-16 00:56:31.544952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.713 [2024-07-16 00:56:31.545216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.713 [2024-07-16 00:56:31.545490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.713 [2024-07-16 00:56:31.545503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.713 [2024-07-16 00:56:31.545513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.973 [2024-07-16 00:56:31.549757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.973 [2024-07-16 00:56:31.559024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.973 [2024-07-16 00:56:31.559542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.973 [2024-07-16 00:56:31.559564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.973 [2024-07-16 00:56:31.559575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.973 [2024-07-16 00:56:31.559840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.973 [2024-07-16 00:56:31.560105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.973 [2024-07-16 00:56:31.560118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.973 [2024-07-16 00:56:31.560127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.973 [2024-07-16 00:56:31.564372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.973 [2024-07-16 00:56:31.573662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.973 [2024-07-16 00:56:31.574145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.973 [2024-07-16 00:56:31.574167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.973 [2024-07-16 00:56:31.574181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.973 [2024-07-16 00:56:31.574453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.973 [2024-07-16 00:56:31.574719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.973 [2024-07-16 00:56:31.574733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.973 [2024-07-16 00:56:31.574744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.973 [2024-07-16 00:56:31.578993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.973 [2024-07-16 00:56:31.588276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.973 [2024-07-16 00:56:31.588759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.973 [2024-07-16 00:56:31.588781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.973 [2024-07-16 00:56:31.588791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.973 [2024-07-16 00:56:31.589059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.973 [2024-07-16 00:56:31.589333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.973 [2024-07-16 00:56:31.589347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.973 [2024-07-16 00:56:31.589358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.973 [2024-07-16 00:56:31.593602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.973 [2024-07-16 00:56:31.602888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.973 [2024-07-16 00:56:31.603381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.973 [2024-07-16 00:56:31.603403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.973 [2024-07-16 00:56:31.603414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.973 [2024-07-16 00:56:31.603679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.973 [2024-07-16 00:56:31.603943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.973 [2024-07-16 00:56:31.603956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.973 [2024-07-16 00:56:31.603965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.973 [2024-07-16 00:56:31.608218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.973 [2024-07-16 00:56:31.617497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.974 [2024-07-16 00:56:31.617990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.974 [2024-07-16 00:56:31.618011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.974 [2024-07-16 00:56:31.618021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.974 [2024-07-16 00:56:31.618291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.974 [2024-07-16 00:56:31.618558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.974 [2024-07-16 00:56:31.618571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.974 [2024-07-16 00:56:31.618580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.974 [2024-07-16 00:56:31.622828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.974 [2024-07-16 00:56:31.632103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.974 [2024-07-16 00:56:31.632568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.974 [2024-07-16 00:56:31.632590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.974 [2024-07-16 00:56:31.632601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.974 [2024-07-16 00:56:31.632865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.974 [2024-07-16 00:56:31.633131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.974 [2024-07-16 00:56:31.633143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.974 [2024-07-16 00:56:31.633156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.974 [2024-07-16 00:56:31.637402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.974 [2024-07-16 00:56:31.646680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.974 [2024-07-16 00:56:31.647216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.974 [2024-07-16 00:56:31.647238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.974 [2024-07-16 00:56:31.647248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.974 [2024-07-16 00:56:31.647519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.974 [2024-07-16 00:56:31.647785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.974 [2024-07-16 00:56:31.647798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.974 [2024-07-16 00:56:31.647808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.974 [2024-07-16 00:56:31.652058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.974 [2024-07-16 00:56:31.661321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.974 [2024-07-16 00:56:31.661897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.974 [2024-07-16 00:56:31.661939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.974 [2024-07-16 00:56:31.661961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.974 [2024-07-16 00:56:31.662553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.974 [2024-07-16 00:56:31.663051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.974 [2024-07-16 00:56:31.663063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.974 [2024-07-16 00:56:31.663072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.974 [2024-07-16 00:56:31.667324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.974 [2024-07-16 00:56:31.676104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.974 [2024-07-16 00:56:31.676582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.974 [2024-07-16 00:56:31.676604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.974 [2024-07-16 00:56:31.676615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.974 [2024-07-16 00:56:31.676878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.974 [2024-07-16 00:56:31.677144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.974 [2024-07-16 00:56:31.677156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.974 [2024-07-16 00:56:31.677165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.974 [2024-07-16 00:56:31.681413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.974 [2024-07-16 00:56:31.690692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.974 [2024-07-16 00:56:31.691229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.974 [2024-07-16 00:56:31.691261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.974 [2024-07-16 00:56:31.691272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.974 [2024-07-16 00:56:31.691537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.974 [2024-07-16 00:56:31.691803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.974 [2024-07-16 00:56:31.691815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.974 [2024-07-16 00:56:31.691825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.974 [2024-07-16 00:56:31.696070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.974 [2024-07-16 00:56:31.705333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.974 [2024-07-16 00:56:31.705910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.974 [2024-07-16 00:56:31.705952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.974 [2024-07-16 00:56:31.705974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.974 [2024-07-16 00:56:31.706567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.974 [2024-07-16 00:56:31.707152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.974 [2024-07-16 00:56:31.707164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.974 [2024-07-16 00:56:31.707174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.974 [2024-07-16 00:56:31.711421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.974 [2024-07-16 00:56:31.719938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.974 [2024-07-16 00:56:31.720473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.974 [2024-07-16 00:56:31.720496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.974 [2024-07-16 00:56:31.720507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.974 [2024-07-16 00:56:31.720770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.974 [2024-07-16 00:56:31.721035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.974 [2024-07-16 00:56:31.721048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.974 [2024-07-16 00:56:31.721057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3217779 Killed "${NVMF_APP[@]}" "$@" 00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:13.974 [2024-07-16 00:56:31.725328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3219381 00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3219381 00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3219381 ']' 00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:13.974 00:56:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.974 [2024-07-16 00:56:31.734599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.974 [2024-07-16 00:56:31.735163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.974 [2024-07-16 00:56:31.735185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.974 [2024-07-16 00:56:31.735195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.974 [2024-07-16 00:56:31.735466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.974 [2024-07-16 00:56:31.735732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.974 [2024-07-16 00:56:31.735744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.974 [2024-07-16 00:56:31.735754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.974 [2024-07-16 00:56:31.740006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.974 [2024-07-16 00:56:31.749501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.974 [2024-07-16 00:56:31.750039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.974 [2024-07-16 00:56:31.750062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.974 [2024-07-16 00:56:31.750072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.975 [2024-07-16 00:56:31.750344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.975 [2024-07-16 00:56:31.750610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.975 [2024-07-16 00:56:31.750623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.975 [2024-07-16 00:56:31.750633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.975 [2024-07-16 00:56:31.754870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:13.975 [2024-07-16 00:56:31.764130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.975 [2024-07-16 00:56:31.764692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.975 [2024-07-16 00:56:31.764714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.975 [2024-07-16 00:56:31.764725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.975 [2024-07-16 00:56:31.764988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.975 [2024-07-16 00:56:31.765261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.975 [2024-07-16 00:56:31.765278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.975 [2024-07-16 00:56:31.765288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.975 [2024-07-16 00:56:31.769530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.975 [2024-07-16 00:56:31.778811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.975 [2024-07-16 00:56:31.779372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.975 [2024-07-16 00:56:31.779395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.975 [2024-07-16 00:56:31.779405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.975 [2024-07-16 00:56:31.779670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.975 [2024-07-16 00:56:31.779935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.975 [2024-07-16 00:56:31.779948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.975 [2024-07-16 00:56:31.779957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.975 [2024-07-16 00:56:31.784204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.975 [2024-07-16 00:56:31.785179] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:30:13.975 [2024-07-16 00:56:31.785233] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.975 [2024-07-16 00:56:31.793483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.975 [2024-07-16 00:56:31.794050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.975 [2024-07-16 00:56:31.794072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.975 [2024-07-16 00:56:31.794082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.975 [2024-07-16 00:56:31.794353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.975 [2024-07-16 00:56:31.794620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.975 [2024-07-16 00:56:31.794632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.975 [2024-07-16 00:56:31.794642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:13.975 [2024-07-16 00:56:31.799007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:13.975 [2024-07-16 00:56:31.808033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:13.975 [2024-07-16 00:56:31.808600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.975 [2024-07-16 00:56:31.808623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:13.975 [2024-07-16 00:56:31.808634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:13.975 [2024-07-16 00:56:31.808899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:13.975 [2024-07-16 00:56:31.809165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:13.975 [2024-07-16 00:56:31.809181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:13.975 [2024-07-16 00:56:31.809191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.235 [2024-07-16 00:56:31.813444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.235 [2024-07-16 00:56:31.822719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.235 [2024-07-16 00:56:31.823206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-16 00:56:31.823228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.235 [2024-07-16 00:56:31.823239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.235 [2024-07-16 00:56:31.823509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.235 [2024-07-16 00:56:31.823775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.235 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.235 [2024-07-16 00:56:31.823788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.235 [2024-07-16 00:56:31.823798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.235 [2024-07-16 00:56:31.828046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.235 [2024-07-16 00:56:31.837316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.235 [2024-07-16 00:56:31.837856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-16 00:56:31.837878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.236 [2024-07-16 00:56:31.837888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.236 [2024-07-16 00:56:31.838151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.236 [2024-07-16 00:56:31.838424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.236 [2024-07-16 00:56:31.838436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.236 [2024-07-16 00:56:31.838447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.236 [2024-07-16 00:56:31.842702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.236 [2024-07-16 00:56:31.851969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.236 [2024-07-16 00:56:31.852541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-16 00:56:31.852563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.236 [2024-07-16 00:56:31.852574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.236 [2024-07-16 00:56:31.852838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.236 [2024-07-16 00:56:31.853103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.236 [2024-07-16 00:56:31.853115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.236 [2024-07-16 00:56:31.853125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.236 [2024-07-16 00:56:31.857366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.236 [2024-07-16 00:56:31.866641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.236 [2024-07-16 00:56:31.867118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-16 00:56:31.867140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.236 [2024-07-16 00:56:31.867150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.236 [2024-07-16 00:56:31.867422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.236 [2024-07-16 00:56:31.867689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.236 [2024-07-16 00:56:31.867701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.236 [2024-07-16 00:56:31.867710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.236 [2024-07-16 00:56:31.871957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.236 [2024-07-16 00:56:31.875941] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:14.236 [2024-07-16 00:56:31.881228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.236 [2024-07-16 00:56:31.881721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-16 00:56:31.881744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.236 [2024-07-16 00:56:31.881754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.236 [2024-07-16 00:56:31.882019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.236 [2024-07-16 00:56:31.882292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.236 [2024-07-16 00:56:31.882305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.236 [2024-07-16 00:56:31.882315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.236 [2024-07-16 00:56:31.886554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.236 [2024-07-16 00:56:31.895829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.236 [2024-07-16 00:56:31.896388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-16 00:56:31.896410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.236 [2024-07-16 00:56:31.896421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.236 [2024-07-16 00:56:31.896684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.236 [2024-07-16 00:56:31.896949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.236 [2024-07-16 00:56:31.896962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.236 [2024-07-16 00:56:31.896972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.236 [2024-07-16 00:56:31.901212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
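The "Total cores available: 3" notice is consistent with the core mask the nvmf target was started with ("-c 0xE" in the EAL parameters earlier): 0xE is binary 1110, i.e. cores 1, 2 and 3, which also matches the three reactors reported further down. A quick popcount check, assuming python3 is available on the host:
  python3 -c 'mask = 0xE; print(bin(mask), bin(mask).count("1"))'   # prints: 0b1110 3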
00:30:14.236 [2024-07-16 00:56:31.910482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.236 [2024-07-16 00:56:31.911051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-16 00:56:31.911073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.236 [2024-07-16 00:56:31.911090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.236 [2024-07-16 00:56:31.911360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.236 [2024-07-16 00:56:31.911626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.236 [2024-07-16 00:56:31.911638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.236 [2024-07-16 00:56:31.911648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.236 [2024-07-16 00:56:31.915895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.236 [2024-07-16 00:56:31.925162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.236 [2024-07-16 00:56:31.925711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-16 00:56:31.925732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.236 [2024-07-16 00:56:31.925743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.236 [2024-07-16 00:56:31.926007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.236 [2024-07-16 00:56:31.926279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.236 [2024-07-16 00:56:31.926292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.236 [2024-07-16 00:56:31.926302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.236 [2024-07-16 00:56:31.930546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.236 [2024-07-16 00:56:31.939814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.236 [2024-07-16 00:56:31.940375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-16 00:56:31.940400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.236 [2024-07-16 00:56:31.940411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.236 [2024-07-16 00:56:31.940676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.236 [2024-07-16 00:56:31.940943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.236 [2024-07-16 00:56:31.940955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.236 [2024-07-16 00:56:31.940965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.236 [2024-07-16 00:56:31.945210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.236 [2024-07-16 00:56:31.954484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.236 [2024-07-16 00:56:31.955049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-16 00:56:31.955072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.236 [2024-07-16 00:56:31.955082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.236 [2024-07-16 00:56:31.955353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.236 [2024-07-16 00:56:31.955619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.236 [2024-07-16 00:56:31.955637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.236 [2024-07-16 00:56:31.955647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.236 [2024-07-16 00:56:31.959890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.236 [2024-07-16 00:56:31.969155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:14.236 [2024-07-16 00:56:31.969658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.236 [2024-07-16 00:56:31.969680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420
00:30:14.236 [2024-07-16 00:56:31.969691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set
00:30:14.236 [2024-07-16 00:56:31.969955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor
00:30:14.236 [2024-07-16 00:56:31.970219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:14.236 [2024-07-16 00:56:31.970231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:14.236 [2024-07-16 00:56:31.970241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:14.236 [2024-07-16 00:56:31.974503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:14.236 [2024-07-16 00:56:31.982526] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:14.236 [2024-07-16 00:56:31.982564] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:14.236 [2024-07-16 00:56:31.982577] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:14.236 [2024-07-16 00:56:31.982588] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:14.236 [2024-07-16 00:56:31.982599] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:14.237 [2024-07-16 00:56:31.982652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:30:14.237 [2024-07-16 00:56:31.982764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:30:14.237 [2024-07-16 00:56:31.982766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:30:14.237 [2024-07-16 00:56:31.983783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:14.237 [2024-07-16 00:56:31.984353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-16 00:56:31.984376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420
00:30:14.237 [2024-07-16 00:56:31.984387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set
00:30:14.237 [2024-07-16 00:56:31.984652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor
00:30:14.237 [2024-07-16 00:56:31.984917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:14.237 [2024-07-16 00:56:31.984929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:14.237 [2024-07-16 00:56:31.984938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:14.237 [2024-07-16 00:56:31.989193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.237 [2024-07-16 00:56:31.998474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.237 [2024-07-16 00:56:31.999050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-16 00:56:31.999073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.237 [2024-07-16 00:56:31.999090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.237 [2024-07-16 00:56:31.999360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.237 [2024-07-16 00:56:31.999628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.237 [2024-07-16 00:56:31.999641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.237 [2024-07-16 00:56:31.999650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.237 [2024-07-16 00:56:32.003898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.237 [2024-07-16 00:56:32.013180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.237 [2024-07-16 00:56:32.013743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-16 00:56:32.013768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.237 [2024-07-16 00:56:32.013779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.237 [2024-07-16 00:56:32.014044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.237 [2024-07-16 00:56:32.014317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.237 [2024-07-16 00:56:32.014331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.237 [2024-07-16 00:56:32.014341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.237 [2024-07-16 00:56:32.018581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
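The app_setup_trace notices a few entries above give two ways to capture the nvmf tracepoints while the target is running; a sketch of both, using the exact command and shared-memory file named in the log (spdk_trace is assumed to be on PATH or under the SPDK build directory of this workspace):
  # Live snapshot of events for the running nvmf app (shm instance 0), as suggested by the log.
  spdk_trace -s nvmf -i 0
  # Or copy the trace file for offline analysis/debug.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0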
00:30:14.237 [2024-07-16 00:56:32.027854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.237 [2024-07-16 00:56:32.028453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-16 00:56:32.028477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.237 [2024-07-16 00:56:32.028487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.237 [2024-07-16 00:56:32.028751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.237 [2024-07-16 00:56:32.029017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.237 [2024-07-16 00:56:32.029029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.237 [2024-07-16 00:56:32.029039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.237 [2024-07-16 00:56:32.033293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.237 [2024-07-16 00:56:32.042576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.237 [2024-07-16 00:56:32.043120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-16 00:56:32.043143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.237 [2024-07-16 00:56:32.043154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.237 [2024-07-16 00:56:32.043424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.237 [2024-07-16 00:56:32.043692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.237 [2024-07-16 00:56:32.043711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.237 [2024-07-16 00:56:32.043721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.237 [2024-07-16 00:56:32.047965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.237 [2024-07-16 00:56:32.057250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.237 [2024-07-16 00:56:32.057716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-16 00:56:32.057738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.237 [2024-07-16 00:56:32.057749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.237 [2024-07-16 00:56:32.058013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.237 [2024-07-16 00:56:32.058285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.237 [2024-07-16 00:56:32.058298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.237 [2024-07-16 00:56:32.058308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.237 [2024-07-16 00:56:32.062550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.237 [2024-07-16 00:56:32.071828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.237 [2024-07-16 00:56:32.072355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-16 00:56:32.072378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.237 [2024-07-16 00:56:32.072388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.237 [2024-07-16 00:56:32.072652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.237 [2024-07-16 00:56:32.072919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.237 [2024-07-16 00:56:32.072931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.237 [2024-07-16 00:56:32.072941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.497 [2024-07-16 00:56:32.077184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.497 [2024-07-16 00:56:32.086455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.497 [2024-07-16 00:56:32.087019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.497 [2024-07-16 00:56:32.087041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.497 [2024-07-16 00:56:32.087052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.497 [2024-07-16 00:56:32.087322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.497 [2024-07-16 00:56:32.087589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.497 [2024-07-16 00:56:32.087601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.497 [2024-07-16 00:56:32.087612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.497 [2024-07-16 00:56:32.091861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.497 [2024-07-16 00:56:32.101131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.497 [2024-07-16 00:56:32.101683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.497 [2024-07-16 00:56:32.101704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.497 [2024-07-16 00:56:32.101715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.497 [2024-07-16 00:56:32.101979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.497 [2024-07-16 00:56:32.102245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.497 [2024-07-16 00:56:32.102264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.497 [2024-07-16 00:56:32.102274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.497 [2024-07-16 00:56:32.106516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.497 [2024-07-16 00:56:32.115792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.497 [2024-07-16 00:56:32.116328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.498 [2024-07-16 00:56:32.116350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.498 [2024-07-16 00:56:32.116361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.498 [2024-07-16 00:56:32.116626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.498 [2024-07-16 00:56:32.116891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.498 [2024-07-16 00:56:32.116904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.498 [2024-07-16 00:56:32.116913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.498 [2024-07-16 00:56:32.121152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.498 [2024-07-16 00:56:32.130468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.498 [2024-07-16 00:56:32.131035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.498 [2024-07-16 00:56:32.131056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.498 [2024-07-16 00:56:32.131067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.498 [2024-07-16 00:56:32.131338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.498 [2024-07-16 00:56:32.131603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.498 [2024-07-16 00:56:32.131616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.498 [2024-07-16 00:56:32.131625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.498 [2024-07-16 00:56:32.135867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.498 [2024-07-16 00:56:32.145138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.498 [2024-07-16 00:56:32.145680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.498 [2024-07-16 00:56:32.145702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.498 [2024-07-16 00:56:32.145712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.498 [2024-07-16 00:56:32.145980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.498 [2024-07-16 00:56:32.146247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.498 [2024-07-16 00:56:32.146266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.498 [2024-07-16 00:56:32.146276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.498 [2024-07-16 00:56:32.150521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.498 [2024-07-16 00:56:32.159797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.498 [2024-07-16 00:56:32.160356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.498 [2024-07-16 00:56:32.160378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.498 [2024-07-16 00:56:32.160390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.498 [2024-07-16 00:56:32.160652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.498 [2024-07-16 00:56:32.160918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.498 [2024-07-16 00:56:32.160930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.498 [2024-07-16 00:56:32.160939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.498 [2024-07-16 00:56:32.165185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.498 [2024-07-16 00:56:32.174465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.498 [2024-07-16 00:56:32.175027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.498 [2024-07-16 00:56:32.175049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.498 [2024-07-16 00:56:32.175059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.498 [2024-07-16 00:56:32.175329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.498 [2024-07-16 00:56:32.175595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.498 [2024-07-16 00:56:32.175607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.498 [2024-07-16 00:56:32.175617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.498 [2024-07-16 00:56:32.179865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.498 [2024-07-16 00:56:32.189127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.498 [2024-07-16 00:56:32.189694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.498 [2024-07-16 00:56:32.189716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.498 [2024-07-16 00:56:32.189727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.498 [2024-07-16 00:56:32.189992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.498 [2024-07-16 00:56:32.190264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.498 [2024-07-16 00:56:32.190277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.498 [2024-07-16 00:56:32.190290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.498 [2024-07-16 00:56:32.194535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.498 [2024-07-16 00:56:32.203798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.498 [2024-07-16 00:56:32.204358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.498 [2024-07-16 00:56:32.204381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.498 [2024-07-16 00:56:32.204392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.498 [2024-07-16 00:56:32.204656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.498 [2024-07-16 00:56:32.204922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.498 [2024-07-16 00:56:32.204934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.498 [2024-07-16 00:56:32.204944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.498 [2024-07-16 00:56:32.209187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.498 [2024-07-16 00:56:32.218466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.498 [2024-07-16 00:56:32.219014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.498 [2024-07-16 00:56:32.219036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.498 [2024-07-16 00:56:32.219047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.498 [2024-07-16 00:56:32.219318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.498 [2024-07-16 00:56:32.219585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.498 [2024-07-16 00:56:32.219598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.498 [2024-07-16 00:56:32.219607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.498 [2024-07-16 00:56:32.223850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.498 [2024-07-16 00:56:32.233116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.498 [2024-07-16 00:56:32.233683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.498 [2024-07-16 00:56:32.233705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.498 [2024-07-16 00:56:32.233716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.498 [2024-07-16 00:56:32.233980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.498 [2024-07-16 00:56:32.234245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.498 [2024-07-16 00:56:32.234265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.498 [2024-07-16 00:56:32.234275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.498 [2024-07-16 00:56:32.238510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.498 [2024-07-16 00:56:32.247780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.498 [2024-07-16 00:56:32.248341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.498 [2024-07-16 00:56:32.248371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.498 [2024-07-16 00:56:32.248382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.498 [2024-07-16 00:56:32.248647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.498 [2024-07-16 00:56:32.248913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.498 [2024-07-16 00:56:32.248925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.498 [2024-07-16 00:56:32.248934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.498 [2024-07-16 00:56:32.253179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.498 [2024-07-16 00:56:32.262461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.498 [2024-07-16 00:56:32.262949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.498 [2024-07-16 00:56:32.262971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.498 [2024-07-16 00:56:32.262982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.498 [2024-07-16 00:56:32.263247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.498 [2024-07-16 00:56:32.263520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.499 [2024-07-16 00:56:32.263533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.499 [2024-07-16 00:56:32.263543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.499 [2024-07-16 00:56:32.267782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.499 [2024-07-16 00:56:32.277062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.499 [2024-07-16 00:56:32.277638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.499 [2024-07-16 00:56:32.277660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.499 [2024-07-16 00:56:32.277671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.499 [2024-07-16 00:56:32.277935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.499 [2024-07-16 00:56:32.278201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.499 [2024-07-16 00:56:32.278213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.499 [2024-07-16 00:56:32.278223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.499 [2024-07-16 00:56:32.282469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.499 [2024-07-16 00:56:32.291739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.499 [2024-07-16 00:56:32.292305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.499 [2024-07-16 00:56:32.292327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.499 [2024-07-16 00:56:32.292337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.499 [2024-07-16 00:56:32.292601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.499 [2024-07-16 00:56:32.292871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.499 [2024-07-16 00:56:32.292883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.499 [2024-07-16 00:56:32.292893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.499 [2024-07-16 00:56:32.297139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.499 [2024-07-16 00:56:32.306414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.499 [2024-07-16 00:56:32.306976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.499 [2024-07-16 00:56:32.306999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.499 [2024-07-16 00:56:32.307010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.499 [2024-07-16 00:56:32.307280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.499 [2024-07-16 00:56:32.307545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.499 [2024-07-16 00:56:32.307558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.499 [2024-07-16 00:56:32.307568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.499 [2024-07-16 00:56:32.311812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.499 [2024-07-16 00:56:32.321076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.499 [2024-07-16 00:56:32.321644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.499 [2024-07-16 00:56:32.321666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.499 [2024-07-16 00:56:32.321676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.499 [2024-07-16 00:56:32.321940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.499 [2024-07-16 00:56:32.322206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.499 [2024-07-16 00:56:32.322218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.499 [2024-07-16 00:56:32.322228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.499 [2024-07-16 00:56:32.326479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.759 [2024-07-16 00:56:32.335749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.759 [2024-07-16 00:56:32.336285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.759 [2024-07-16 00:56:32.336307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.759 [2024-07-16 00:56:32.336318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.759 [2024-07-16 00:56:32.336582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.759 [2024-07-16 00:56:32.336848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.759 [2024-07-16 00:56:32.336860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.759 [2024-07-16 00:56:32.336870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.759 [2024-07-16 00:56:32.341123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.759 [2024-07-16 00:56:32.350406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.759 [2024-07-16 00:56:32.350951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.759 [2024-07-16 00:56:32.350972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.759 [2024-07-16 00:56:32.350983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.759 [2024-07-16 00:56:32.351247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.759 [2024-07-16 00:56:32.351520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.759 [2024-07-16 00:56:32.351532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.759 [2024-07-16 00:56:32.351542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.759 [2024-07-16 00:56:32.355787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.759 [2024-07-16 00:56:32.365062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.759 [2024-07-16 00:56:32.365603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.759 [2024-07-16 00:56:32.365625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.759 [2024-07-16 00:56:32.365636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.759 [2024-07-16 00:56:32.365900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.759 [2024-07-16 00:56:32.366165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.759 [2024-07-16 00:56:32.366178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.759 [2024-07-16 00:56:32.366187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.759 [2024-07-16 00:56:32.370443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.759 [2024-07-16 00:56:32.379714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.759 [2024-07-16 00:56:32.380201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.759 [2024-07-16 00:56:32.380223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.759 [2024-07-16 00:56:32.380233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.759 [2024-07-16 00:56:32.380503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.759 [2024-07-16 00:56:32.380769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.759 [2024-07-16 00:56:32.380782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.759 [2024-07-16 00:56:32.380791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.759 [2024-07-16 00:56:32.385023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.759 [2024-07-16 00:56:32.394307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.759 [2024-07-16 00:56:32.394867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.759 [2024-07-16 00:56:32.394888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.759 [2024-07-16 00:56:32.394903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.759 [2024-07-16 00:56:32.395167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.759 [2024-07-16 00:56:32.395439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.759 [2024-07-16 00:56:32.395452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.759 [2024-07-16 00:56:32.395462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.759 [2024-07-16 00:56:32.399708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.760 [2024-07-16 00:56:32.408975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.760 [2024-07-16 00:56:32.409547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.760 [2024-07-16 00:56:32.409569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.760 [2024-07-16 00:56:32.409579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.760 [2024-07-16 00:56:32.409843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.760 [2024-07-16 00:56:32.410108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.760 [2024-07-16 00:56:32.410120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.760 [2024-07-16 00:56:32.410130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.760 [2024-07-16 00:56:32.414374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.760 [2024-07-16 00:56:32.423651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.760 [2024-07-16 00:56:32.424180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.760 [2024-07-16 00:56:32.424201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.760 [2024-07-16 00:56:32.424212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.760 [2024-07-16 00:56:32.424484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.760 [2024-07-16 00:56:32.424752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.760 [2024-07-16 00:56:32.424765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.760 [2024-07-16 00:56:32.424774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.760 [2024-07-16 00:56:32.429016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.760 [2024-07-16 00:56:32.438288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.760 [2024-07-16 00:56:32.438851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.760 [2024-07-16 00:56:32.438872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.760 [2024-07-16 00:56:32.438883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.760 [2024-07-16 00:56:32.439147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.760 [2024-07-16 00:56:32.439419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.760 [2024-07-16 00:56:32.439436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.760 [2024-07-16 00:56:32.439446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.760 [2024-07-16 00:56:32.443682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.760 [2024-07-16 00:56:32.452975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.760 [2024-07-16 00:56:32.453463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.760 [2024-07-16 00:56:32.453485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.760 [2024-07-16 00:56:32.453495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.760 [2024-07-16 00:56:32.453761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.760 [2024-07-16 00:56:32.454027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.760 [2024-07-16 00:56:32.454040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.760 [2024-07-16 00:56:32.454049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.760 [2024-07-16 00:56:32.458297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.760 [2024-07-16 00:56:32.467564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.760 [2024-07-16 00:56:32.468100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.760 [2024-07-16 00:56:32.468122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.760 [2024-07-16 00:56:32.468132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.760 [2024-07-16 00:56:32.468402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.760 [2024-07-16 00:56:32.468667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.760 [2024-07-16 00:56:32.468680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.760 [2024-07-16 00:56:32.468689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.760 [2024-07-16 00:56:32.472944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.760 [2024-07-16 00:56:32.482216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.760 [2024-07-16 00:56:32.482739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.760 [2024-07-16 00:56:32.482761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.760 [2024-07-16 00:56:32.482772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.760 [2024-07-16 00:56:32.483036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.760 [2024-07-16 00:56:32.483307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.760 [2024-07-16 00:56:32.483320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.760 [2024-07-16 00:56:32.483330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.760 [2024-07-16 00:56:32.487570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.760 [2024-07-16 00:56:32.496842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.760 [2024-07-16 00:56:32.497388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.760 [2024-07-16 00:56:32.497410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.760 [2024-07-16 00:56:32.497421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.760 [2024-07-16 00:56:32.497685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.760 [2024-07-16 00:56:32.497951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.760 [2024-07-16 00:56:32.497963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.760 [2024-07-16 00:56:32.497973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.760 [2024-07-16 00:56:32.502220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.760 [2024-07-16 00:56:32.511502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.760 [2024-07-16 00:56:32.511943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.760 [2024-07-16 00:56:32.511965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.760 [2024-07-16 00:56:32.511975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.760 [2024-07-16 00:56:32.512240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.760 [2024-07-16 00:56:32.512512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.760 [2024-07-16 00:56:32.512525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.760 [2024-07-16 00:56:32.512535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.760 [2024-07-16 00:56:32.516779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.760 [2024-07-16 00:56:32.526090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.760 [2024-07-16 00:56:32.526643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.760 [2024-07-16 00:56:32.526665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.760 [2024-07-16 00:56:32.526676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.760 [2024-07-16 00:56:32.526941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.760 [2024-07-16 00:56:32.527207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.760 [2024-07-16 00:56:32.527219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.760 [2024-07-16 00:56:32.527229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.760 [2024-07-16 00:56:32.531498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.760 [2024-07-16 00:56:32.540777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.760 [2024-07-16 00:56:32.541267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.760 [2024-07-16 00:56:32.541290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.760 [2024-07-16 00:56:32.541301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.760 [2024-07-16 00:56:32.541569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.760 [2024-07-16 00:56:32.541836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.760 [2024-07-16 00:56:32.541848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.760 [2024-07-16 00:56:32.541858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.760 [2024-07-16 00:56:32.546102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.760 [2024-07-16 00:56:32.555381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.760 [2024-07-16 00:56:32.555790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.760 [2024-07-16 00:56:32.555811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.760 [2024-07-16 00:56:32.555822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.760 [2024-07-16 00:56:32.556085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.760 [2024-07-16 00:56:32.556357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.760 [2024-07-16 00:56:32.556370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.760 [2024-07-16 00:56:32.556380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.760 [2024-07-16 00:56:32.560627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:14.760 [2024-07-16 00:56:32.570147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.760 [2024-07-16 00:56:32.570725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.760 [2024-07-16 00:56:32.570748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.760 [2024-07-16 00:56:32.570759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.760 [2024-07-16 00:56:32.571023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.760 [2024-07-16 00:56:32.571295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.760 [2024-07-16 00:56:32.571308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.760 [2024-07-16 00:56:32.571317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.760 [2024-07-16 00:56:32.575562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:14.760 [2024-07-16 00:56:32.584836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:14.760 [2024-07-16 00:56:32.585407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.760 [2024-07-16 00:56:32.585429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:14.760 [2024-07-16 00:56:32.585439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:14.760 [2024-07-16 00:56:32.585703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:14.760 [2024-07-16 00:56:32.585969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:14.760 [2024-07-16 00:56:32.585982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:14.760 [2024-07-16 00:56:32.585995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:14.760 [2024-07-16 00:56:32.590240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.021 [2024-07-16 00:56:32.599522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.021 [2024-07-16 00:56:32.599999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.021 [2024-07-16 00:56:32.600021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.021 [2024-07-16 00:56:32.600031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.021 [2024-07-16 00:56:32.600303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.021 [2024-07-16 00:56:32.600570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.021 [2024-07-16 00:56:32.600584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.021 [2024-07-16 00:56:32.600595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.021 [2024-07-16 00:56:32.604840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.021 [2024-07-16 00:56:32.614112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.021 [2024-07-16 00:56:32.614587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.021 [2024-07-16 00:56:32.614609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.021 [2024-07-16 00:56:32.614620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.021 [2024-07-16 00:56:32.614884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.021 [2024-07-16 00:56:32.615149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.021 [2024-07-16 00:56:32.615162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.021 [2024-07-16 00:56:32.615172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.021 [2024-07-16 00:56:32.619425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.021 [2024-07-16 00:56:32.628701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.021 [2024-07-16 00:56:32.629243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.021 [2024-07-16 00:56:32.629272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.021 [2024-07-16 00:56:32.629284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.021 [2024-07-16 00:56:32.629548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.021 [2024-07-16 00:56:32.629814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.021 [2024-07-16 00:56:32.629827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.021 [2024-07-16 00:56:32.629837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.021 [2024-07-16 00:56:32.634085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.021 [2024-07-16 00:56:32.643369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.021 [2024-07-16 00:56:32.643935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.021 [2024-07-16 00:56:32.643961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.021 [2024-07-16 00:56:32.643972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.021 [2024-07-16 00:56:32.644237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.021 [2024-07-16 00:56:32.644510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.021 [2024-07-16 00:56:32.644524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.021 [2024-07-16 00:56:32.644534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.021 [2024-07-16 00:56:32.648782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.021 [2024-07-16 00:56:32.658067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.021 [2024-07-16 00:56:32.658533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.021 [2024-07-16 00:56:32.658555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.021 [2024-07-16 00:56:32.658566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.021 [2024-07-16 00:56:32.658831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.021 [2024-07-16 00:56:32.659096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.021 [2024-07-16 00:56:32.659109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.021 [2024-07-16 00:56:32.659119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.021 [2024-07-16 00:56:32.663371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.021 [2024-07-16 00:56:32.672669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.021 [2024-07-16 00:56:32.673136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.021 [2024-07-16 00:56:32.673158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.021 [2024-07-16 00:56:32.673171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.021 [2024-07-16 00:56:32.673445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.021 [2024-07-16 00:56:32.673712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.021 [2024-07-16 00:56:32.673725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.021 [2024-07-16 00:56:32.673734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.021 [2024-07-16 00:56:32.677974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.021 [2024-07-16 00:56:32.687263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.022 [2024-07-16 00:56:32.687696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.022 [2024-07-16 00:56:32.687718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.022 [2024-07-16 00:56:32.687728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.022 [2024-07-16 00:56:32.687992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.022 [2024-07-16 00:56:32.688270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.022 [2024-07-16 00:56:32.688291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.022 [2024-07-16 00:56:32.688300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.022 [2024-07-16 00:56:32.692548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.022 [2024-07-16 00:56:32.701823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.022 [2024-07-16 00:56:32.702295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.022 [2024-07-16 00:56:32.702317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.022 [2024-07-16 00:56:32.702328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.022 [2024-07-16 00:56:32.702593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.022 [2024-07-16 00:56:32.702858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.022 [2024-07-16 00:56:32.702871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.022 [2024-07-16 00:56:32.702880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.022 [2024-07-16 00:56:32.707173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.022 [2024-07-16 00:56:32.716457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.022 [2024-07-16 00:56:32.717004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.022 [2024-07-16 00:56:32.717026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.022 [2024-07-16 00:56:32.717037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.022 [2024-07-16 00:56:32.717308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.022 [2024-07-16 00:56:32.717575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.022 [2024-07-16 00:56:32.717587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.022 [2024-07-16 00:56:32.717597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.022 [2024-07-16 00:56:32.721844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.022 [2024-07-16 00:56:32.731117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.022 [2024-07-16 00:56:32.731635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.022 [2024-07-16 00:56:32.731657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.022 [2024-07-16 00:56:32.731668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.022 [2024-07-16 00:56:32.731932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.022 [2024-07-16 00:56:32.732198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.022 [2024-07-16 00:56:32.732210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.022 [2024-07-16 00:56:32.732220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.022 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:15.022 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:30:15.022 00:56:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:15.022 [2024-07-16 00:56:32.736475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.022 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:15.022 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:15.022 [2024-07-16 00:56:32.745748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.022 [2024-07-16 00:56:32.746242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.022 [2024-07-16 00:56:32.746272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.022 [2024-07-16 00:56:32.746283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.022 [2024-07-16 00:56:32.746547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.022 [2024-07-16 00:56:32.746813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.022 [2024-07-16 00:56:32.746826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.022 [2024-07-16 00:56:32.746836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.022 [2024-07-16 00:56:32.751087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
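Interleaved with the reconnect failures, the bdevperf.sh shell trace resumes: the startup wait in autotest_common.sh tests its retry counter with (( i == 0 )), the comparison comes back false, the helper returns 0, and timing_exit closes the start_nvmf_tgt timing region, so the nvmf target is evidently up and the script can move on to configuring it over RPC. The idiom relies on bash arithmetic evaluation returning success only for a non-zero result; a minimal illustration, with the variable value chosen arbitrarily:

    # (( expr )) exits 0 only when expr evaluates non-zero, so '(( i == 0 ))'
    # succeeds only if the retry counter ran all the way down to zero
    i=3
    if (( i == 0 )); then echo 'startup wait timed out'; else echo 'target is up'; fi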
00:30:15.022 [2024-07-16 00:56:32.760367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.022 [2024-07-16 00:56:32.760777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.022 [2024-07-16 00:56:32.760799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.022 [2024-07-16 00:56:32.760809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.022 [2024-07-16 00:56:32.761073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.022 [2024-07-16 00:56:32.761344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.022 [2024-07-16 00:56:32.761357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.022 [2024-07-16 00:56:32.761367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.022 [2024-07-16 00:56:32.765616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.022 00:56:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.022 00:56:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:15.022 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.022 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:15.022 [2024-07-16 00:56:32.775153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.022 [2024-07-16 00:56:32.775622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.022 [2024-07-16 00:56:32.775645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.022 [2024-07-16 00:56:32.775656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.022 [2024-07-16 00:56:32.775920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.022 [2024-07-16 00:56:32.776186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.022 [2024-07-16 00:56:32.776203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.022 [2024-07-16 00:56:32.776212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.022 [2024-07-16 00:56:32.776225] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.022 [2024-07-16 00:56:32.780467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.022 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.022 00:56:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:15.022 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.022 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:15.022 [2024-07-16 00:56:32.789744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.022 [2024-07-16 00:56:32.790219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.022 [2024-07-16 00:56:32.790241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.022 [2024-07-16 00:56:32.790251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.022 [2024-07-16 00:56:32.790522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.022 [2024-07-16 00:56:32.790788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.022 [2024-07-16 00:56:32.790801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.022 [2024-07-16 00:56:32.790810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.022 [2024-07-16 00:56:32.795057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.022 [2024-07-16 00:56:32.804349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.022 [2024-07-16 00:56:32.804825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.022 [2024-07-16 00:56:32.804846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.022 [2024-07-16 00:56:32.804857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.022 [2024-07-16 00:56:32.805121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.022 [2024-07-16 00:56:32.805394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.022 [2024-07-16 00:56:32.805407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.022 [2024-07-16 00:56:32.805417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.022 [2024-07-16 00:56:32.809670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:15.022 [2024-07-16 00:56:32.818946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.022 [2024-07-16 00:56:32.819640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.022 [2024-07-16 00:56:32.819666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.022 [2024-07-16 00:56:32.819677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.022 [2024-07-16 00:56:32.819945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.022 [2024-07-16 00:56:32.820211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.023 [2024-07-16 00:56:32.820228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.023 [2024-07-16 00:56:32.820238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.023 Malloc0 00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:15.023 [2024-07-16 00:56:32.824498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:15.023 [2024-07-16 00:56:32.833523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.023 [2024-07-16 00:56:32.833984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.023 [2024-07-16 00:56:32.834006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b6080 with addr=10.0.0.2, port=4420 00:30:15.023 [2024-07-16 00:56:32.834018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b6080 is same with the state(5) to be set 00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:15.023 [2024-07-16 00:56:32.834288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b6080 (9): Bad file descriptor 00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.023 [2024-07-16 00:56:32.834555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:15.023 [2024-07-16 00:56:32.834568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:15.023 [2024-07-16 00:56:32.834577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:15.023 [2024-07-16 00:56:32.838823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
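At this point the trace shows the target being configured over JSON-RPC: nvmf_create_transport brings up the TCP transport (the "TCP Transport Init" notice above), bdev_malloc_create makes a 64 MB, 512-byte-block malloc bdev named Malloc0, nvmf_create_subsystem creates nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_ns attaches Malloc0 to it as a namespace, and the listener registration for 10.0.0.2:4420 follows in the lines below. rpc_cmd is the autotest helper that forwards these calls to scripts/rpc.py; issued by hand from an SPDK checkout they would look roughly like this (default RPC socket assumed):

    # rough hand-run equivalent of the rpc_cmd calls traced here; arguments are
    # copied from the trace, the direct rpc.py invocation itself is an assumption
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420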
00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:15.023 [2024-07-16 00:56:32.845327] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.023 [2024-07-16 00:56:32.848091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.023 00:56:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3218325 00:30:15.282 [2024-07-16 00:56:32.969039] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:25.376 00:30:25.376 Latency(us) 00:30:25.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.376 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:25.376 Verification LBA range: start 0x0 length 0x4000 00:30:25.376 Nvme1n1 : 15.01 3107.52 12.14 8628.33 0.00 10875.72 953.25 38130.04 00:30:25.376 =================================================================================================================== 00:30:25.376 Total : 3107.52 12.14 8628.33 0.00 10875.72 953.25 38130.04 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:25.376 rmmod nvme_tcp 00:30:25.376 rmmod nvme_fabrics 00:30:25.376 rmmod nvme_keyring 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3219381 ']' 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3219381 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3219381 ']' 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3219381 00:30:25.376 00:56:41 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3219381 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3219381' 00:30:25.376 killing process with pid 3219381 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3219381 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3219381 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:25.376 00:56:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.308 00:56:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:26.308 00:30:26.308 real 0m26.926s 00:30:26.308 user 1m4.121s 00:30:26.308 sys 0m6.531s 00:30:26.308 00:56:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:26.308 00:56:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:26.308 ************************************ 00:30:26.308 END TEST nvmf_bdevperf 00:30:26.308 ************************************ 00:30:26.308 00:56:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:26.308 00:56:43 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:26.308 00:56:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:26.308 00:56:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:26.308 00:56:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:26.308 ************************************ 00:30:26.308 START TEST nvmf_target_disconnect 00:30:26.308 ************************************ 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:26.308 * Looking for test storage... 
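The run then completes: once the listener is live the controller reset finally succeeds, bdevperf reports its 15 s verify results for Nvme1n1, the subsystem is deleted, the nvme-tcp kernel modules are unloaded, the target process (pid 3219381) is killed, and the nvmf_bdevperf test ends. autotest immediately starts the next host test by handing target_disconnect.sh the same transport option; launched by hand from the checkout used here it would look roughly like this (sudo is an assumption, the path is taken from the trace):

    # launching the next test standalone; most nvmf tests expect root privileges
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo test/nvmf/host/target_disconnect.sh --transport=tcp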
00:30:26.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:26.308 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:30:26.565 00:56:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
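nvmftestinit now turns to the physical NICs: nvmf/common.sh builds lists of supported PCI device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox parts) and scans the bus for them, which is what the "Found 0000:af:00.x" lines below report before the matching net devices (cvl_0_0 / cvl_0_1) are picked up for the TCP test network. A quick manual cross-check for the E810 functions on this node would be an lspci filter on the same vendor:device pair (lspci availability assumed):

    # list the PCI functions matching the 0x8086:0x159b hits reported below
    lspci -d 8086:159b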
00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:31.832 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:31.832 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.832 00:56:49 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:31.832 Found net devices under 0000:af:00.0: cvl_0_0 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:31.832 Found net devices under 0000:af:00.1: cvl_0_1 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:31.832 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:32.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:32.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:30:32.091 00:30:32.091 --- 10.0.0.2 ping statistics --- 00:30:32.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.091 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:32.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:30:32.091 00:30:32.091 --- 10.0.0.1 ping statistics --- 00:30:32.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.091 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:32.091 ************************************ 00:30:32.091 START TEST nvmf_target_disconnect_tc1 00:30:32.091 ************************************ 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:30:32.091 
00:56:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:32.091 00:56:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:32.351 00:56:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:32.351 00:56:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:32.351 00:56:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:32.351 00:56:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:32.351 00:56:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:32.351 00:56:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:32.351 00:56:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:32.351 00:56:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:32.351 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.351 [2024-07-16 00:56:50.045575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.351 [2024-07-16 00:56:50.045632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x105ef00 with addr=10.0.0.2, port=4420 00:30:32.351 [2024-07-16 00:56:50.045663] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:32.351 [2024-07-16 00:56:50.045680] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:32.351 [2024-07-16 00:56:50.045689] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:32.351 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:32.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:32.351 Initializing NVMe Controllers 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:32.351 00:30:32.351 real 0m0.132s 00:30:32.351 user 0m0.054s 00:30:32.351 sys 
0m0.077s 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:32.351 ************************************ 00:30:32.351 END TEST nvmf_target_disconnect_tc1 00:30:32.351 ************************************ 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:32.351 ************************************ 00:30:32.351 START TEST nvmf_target_disconnect_tc2 00:30:32.351 ************************************ 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3224708 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3224708 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3224708 ']' 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
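tc1 above only needed the network plumbing: with no target listening yet, spdk_nvme_probe is expected to fail, and the NOT wrapper turns that failure into a pass. tc2 now brings a real target up inside the cvl_0_0_ns_spdk namespace prepared earlier by nvmf_tcp_init (cvl_0_0 at 10.0.0.2 inside the namespace, cvl_0_1 at 10.0.0.1 outside) and configures it over the /var/tmp/spdk.sock RPC socket. A condensed, hedged recap of that bring-up as it would be typed by hand, using only operations that appear in the trace ($SPDK stands for the spdk checkout root, and scripts/rpc.py is used here in place of the harness's rpc_cmd wrapper and waitforlisten helper):

    # start the target in the namespace, then wait for its RPC socket to answer
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # Malloc0 back-end bdev, TCP transport, subsystem cnode1 with that namespace,
    # and a listener on 10.0.0.2:4420
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, host/target_disconnect.sh launches the reconnect example against 10.0.0.2:4420 (reconnectpid below), sleeps, and then kills the target with kill -9 to force the disconnect the test is named after.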
00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:32.351 00:56:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:32.613 [2024-07-16 00:56:50.189820] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:30:32.613 [2024-07-16 00:56:50.189885] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.613 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.613 [2024-07-16 00:56:50.320228] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:32.959 [2024-07-16 00:56:50.471408] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.959 [2024-07-16 00:56:50.471480] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.959 [2024-07-16 00:56:50.471502] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.959 [2024-07-16 00:56:50.471520] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.959 [2024-07-16 00:56:50.471536] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.959 [2024-07-16 00:56:50.471693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:32.959 [2024-07-16 00:56:50.471743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:32.959 [2024-07-16 00:56:50.471856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:32.959 [2024-07-16 00:56:50.471861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:33.230 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:33.230 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:33.230 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:33.230 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:33.230 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:33.488 Malloc0 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:33.488 00:56:51 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:33.488 [2024-07-16 00:56:51.134102] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:33.488 [2024-07-16 00:56:51.166901] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:33.488 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.489 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:33.489 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.489 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3224773 00:30:33.489 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:33.489 00:56:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:33.489 EAL: No free 2048 kB 
hugepages reported on node 1 00:30:35.394 00:56:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3224708 00:30:35.394 00:56:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting 
I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 [2024-07-16 00:56:53.202069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 [2024-07-16 00:56:53.202365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 
00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 [2024-07-16 00:56:53.202963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read 
completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Read completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 Write completed with error (sct=0, sc=8) 00:30:35.394 starting I/O failed 00:30:35.394 [2024-07-16 00:56:53.203338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:35.394 [2024-07-16 00:56:53.203667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.203714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.204049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.204081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.204407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.204447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.204728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.204760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.205062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.205092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.205435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.205466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 
00:30:35.394 [2024-07-16 00:56:53.205822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.205850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.206084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.206116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.206418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.206451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.206655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.206686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.206979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.207010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.207305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.207336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.207562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.207593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.207767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.207798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.208052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.208082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.208300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.208332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 
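Everything from the kill -9 onward is the behaviour under test rather than a malfunction of the harness: the four I/O qpairs report CQ transport error -6 and fail back their outstanding reads and writes, and the reconnect example then retries its connections in a loop, logging "connect() failed, errno = 111" followed by "qpair failed and we were unable to recover it" on every attempt, because nothing is listening on 10.0.0.2:4420 once nvmf_tgt is gone (errno 111 is ECONNREFUSED). A quick, hedged way to confirm that state by hand while the flood is running (ss(8) is assumed to be available; the namespace name and port come from the trace):

    # after kill -9 of nvmf_tgt there should be no TCP listener left on 4420 in the
    # target namespace, which is exactly why every reconnect gets ECONNREFUSED (111)
    ip netns exec cvl_0_0_ns_spdk ss -ltn | grep ':4420' \
        || echo 'no listener on 10.0.0.2:4420 - initiator connect() will keep failing'

The repeated entries that follow are the same retry loop continuing against the dead listener.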
00:30:35.394 [2024-07-16 00:56:53.208589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.208633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.394 [2024-07-16 00:56:53.208766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.394 [2024-07-16 00:56:53.208786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.394 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.209049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.209080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.209283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.209315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.209477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.209507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.209721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.209751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.210048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.210078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.210427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.210459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.210696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.210727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.211026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.211057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 
00:30:35.395 [2024-07-16 00:56:53.211366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.211397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.211674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.211705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.212005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.212036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.212401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.212434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.212608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.212639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.212889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.212920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.213127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.213146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.213352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.213372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.213515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.213534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.213761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.213779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 
00:30:35.395 [2024-07-16 00:56:53.214041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.214072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.214230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.214288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.214494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.214534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.214712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.214731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.214921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.214941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.215150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.215170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.215373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.215397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.215620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.215639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.215847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.215866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.216060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.216079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 
00:30:35.395 [2024-07-16 00:56:53.216283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.216303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.216452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.216471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.216797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.216816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.217017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.217037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.217309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.217329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.217578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.217598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.217856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.217875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.218160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.218179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.218360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.218380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 00:30:35.395 [2024-07-16 00:56:53.218577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.395 [2024-07-16 00:56:53.218597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.395 qpair failed and we were unable to recover it. 
00:30:35.395 [2024-07-16 00:56:53.218845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.395 [2024-07-16 00:56:53.218864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:35.395 qpair failed and we were unable to recover it.
00:30:35.395 [2024-07-16 00:56:53.222751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.395 [2024-07-16 00:56:53.222819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420
00:30:35.395 qpair failed and we were unable to recover it.
00:30:35.663 [2024-07-16 00:56:53.234803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.663 [2024-07-16 00:56:53.234873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420
00:30:35.663 qpair failed and we were unable to recover it.
00:30:35.666 (the same three-line sequence — connect() failed, errno = 111; sock connection error; "qpair failed and we were unable to recover it" — repeats continuously from 00:56:53.218 through 00:56:53.281 for tqpair=0x7efd44000b90, tqpair=0x7efd34000b90, and tqpair=0xeaafd0, every attempt targeting addr=10.0.0.2, port=4420)
00:30:35.666 [2024-07-16 00:56:53.281691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.666 [2024-07-16 00:56:53.281722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.666 qpair failed and we were unable to recover it. 00:30:35.666 [2024-07-16 00:56:53.282018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.666 [2024-07-16 00:56:53.282050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-07-16 00:56:53.282271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.667 [2024-07-16 00:56:53.282303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-07-16 00:56:53.282509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.667 [2024-07-16 00:56:53.282540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-07-16 00:56:53.282843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.667 [2024-07-16 00:56:53.282874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-07-16 00:56:53.283152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.667 [2024-07-16 00:56:53.283183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-07-16 00:56:53.283400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.667 [2024-07-16 00:56:53.283439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-07-16 00:56:53.283751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.667 [2024-07-16 00:56:53.283782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-07-16 00:56:53.284059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.667 [2024-07-16 00:56:53.284089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-07-16 00:56:53.284402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.667 [2024-07-16 00:56:53.284434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.667 qpair failed and we were unable to recover it. 
00:30:35.667 [2024-07-16 00:56:53.284584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.667 [2024-07-16 00:56:53.284614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-07-16 00:56:53.284763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.667 [2024-07-16 00:56:53.284794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.667 qpair failed and we were unable to recover it. 00:30:35.667 [2024-07-16 00:56:53.285033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.667 [2024-07-16 00:56:53.285064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.285317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.285349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.285675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.285706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.285934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.285965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.286287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.286320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.286534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.286565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.286782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.286813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.287112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.287143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 
00:30:35.668 [2024-07-16 00:56:53.287386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.287420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.287769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.287801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.288077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.288108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.288361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.288394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.288668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.288699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.289005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.289036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.289330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.289362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.289596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.289627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.289929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.289960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.290238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.290278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 
00:30:35.668 [2024-07-16 00:56:53.290596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.290627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.290900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.290932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.291141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.291172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.291490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.291522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.291737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.291768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.292011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.292043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.292248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.292297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.292589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.292620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.292919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.292950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.293264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.293297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 
00:30:35.668 [2024-07-16 00:56:53.293513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.293545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.293767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.293797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.294092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.294123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.294400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.294432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.294734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.294765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.295061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.295093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.295423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.295455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.295756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.295799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.296085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.296116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.296420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.296452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 
00:30:35.668 [2024-07-16 00:56:53.296746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.296777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.297050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.297082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.297360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.297392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.297714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.297745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.298045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.298077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.298378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.298410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.298654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.298686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.299017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.299049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.299354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.299387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.299682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.299713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 
00:30:35.668 [2024-07-16 00:56:53.299947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.299978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.300233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.300284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.300595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.300627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.300930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.300962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.301203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.301234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.301543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.301576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.301804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.301835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.302065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.302096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.302401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.302434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.302659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.302691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 
00:30:35.668 [2024-07-16 00:56:53.302943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.302974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.303297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.303330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.303638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.303668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.303962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.303994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.668 qpair failed and we were unable to recover it. 00:30:35.668 [2024-07-16 00:56:53.304302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.668 [2024-07-16 00:56:53.304340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.304637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.304668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.304901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.304932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.305234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.305275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.305495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.305526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.305735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.305766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 
00:30:35.669 [2024-07-16 00:56:53.306047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.306078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.306394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.306426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.306712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.306743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.306953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.306984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.307126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.307158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.307490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.307522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.307850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.307882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.308092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.308123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.308419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.308453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.308757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.308788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 
00:30:35.669 [2024-07-16 00:56:53.309085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.309117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.309422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.309455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.309676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.309707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.309856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.309887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.310103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.310135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.310464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.310496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.310705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.310736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.311036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.311068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.311204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.311235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.311632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.311665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 
00:30:35.669 [2024-07-16 00:56:53.311996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.312028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.312332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.312366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.312608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.312639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.312869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.312900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.313206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.313237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.313543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.313575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.313853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.313884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.314198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.314229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.314552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.314584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.314762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.314811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 
00:30:35.669 [2024-07-16 00:56:53.315021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.315052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.315349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.315382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.315593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.315624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.315875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.315906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.316148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.316179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.316528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.316567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.316780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.316811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.317106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.317138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.317442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.317474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.317796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.317827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 
00:30:35.669 [2024-07-16 00:56:53.318131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.318163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.318399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.318431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.318712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.318743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.318969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.319001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.319305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.319338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.319632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.319664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.319870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.319901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.320185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.320215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.320463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.320495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.320837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.320870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 
00:30:35.669 [2024-07-16 00:56:53.321184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.321216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.321452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.321485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.321785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.669 [2024-07-16 00:56:53.321818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.669 qpair failed and we were unable to recover it. 00:30:35.669 [2024-07-16 00:56:53.322056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.322087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.322294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.322327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.322539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.322569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.322875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.322907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.323154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.323185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.323508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.323541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.323766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.323797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 
00:30:35.670 [2024-07-16 00:56:53.324108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.324139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.324310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.324354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.324567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.324599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.324814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.324846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.325154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.325186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.325418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.325451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.325734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.325766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.326074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.326106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.326399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.326431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.326767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.326798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 
00:30:35.670 [2024-07-16 00:56:53.327128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.327160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.327334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.327366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.327579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.327610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.327895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.327926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.328237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.328287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.328546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.328577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.328735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.328768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.328982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.329013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.329164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.329196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.329491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.329523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 
00:30:35.670 [2024-07-16 00:56:53.329753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.329785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.330068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.330100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.330439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.330472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.330782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.330814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.331025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.331056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.331230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.331271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.331442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.331474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.331783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.331814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.332126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.332158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.332456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.332489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 
00:30:35.670 [2024-07-16 00:56:53.332791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.332822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.333122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.333154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.333402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.333434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.333688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.333720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.334033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.334065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.334387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.334420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.334685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.334717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.334937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.334968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.335146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.335178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.335474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.335506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 
00:30:35.670 [2024-07-16 00:56:53.335791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.335823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.336141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.336172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.336488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.336520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.336808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.336848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.337154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.337186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.670 [2024-07-16 00:56:53.337477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.670 [2024-07-16 00:56:53.337509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.670 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.337818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.337850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.338141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.338175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.338414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.338448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.338734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.338766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 
00:30:35.671 [2024-07-16 00:56:53.339056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.339089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.339419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.339451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.339684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.339716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.339931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.339964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.340182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.340214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.340511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.340543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.340808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.340840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.341163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.341196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.341549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.341582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.341893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.341925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 
00:30:35.671 [2024-07-16 00:56:53.342218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.342250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.342487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.342519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.342735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.342767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.343076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.343108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.343279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.343312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.343629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.343661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.343944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.343976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.344141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.344172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.344535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.344607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.344949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.344985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 
00:30:35.671 [2024-07-16 00:56:53.345288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.345310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.345622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.345653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.345954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.345987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.346277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.346310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.346624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.346655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.346965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.346997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.347335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.347368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.347618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.347649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.347875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.347906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.348120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.348141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 
00:30:35.671 [2024-07-16 00:56:53.348373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.348393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.348658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.348700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.348986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.349018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.349351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.349383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.349702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.349734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.350018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.350061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.350370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.350403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.350713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.350744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.351038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.351070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.351382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.351414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 
00:30:35.671 [2024-07-16 00:56:53.351633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.351664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.351975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.351995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.352226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.352246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.352405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.352426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.352739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.352770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.353074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.353106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.353436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.353468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.353706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.353743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.354051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.354082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.354306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.354338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 
00:30:35.671 [2024-07-16 00:56:53.354566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.354599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.354915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.354946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.355268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.355289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.671 qpair failed and we were unable to recover it. 00:30:35.671 [2024-07-16 00:56:53.355629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.671 [2024-07-16 00:56:53.355661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.355974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.356006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.356219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.356252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.356556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.356599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.356887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.356932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.357172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.357203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.357427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.357460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 
00:30:35.672 [2024-07-16 00:56:53.357775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.357807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.358052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.358084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.358264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.358297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.358534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.358566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.358908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.358940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.359173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.359205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.359528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.359561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.359858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.359889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.360197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.360217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.360516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.360536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 
00:30:35.672 [2024-07-16 00:56:53.360852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.360872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.361009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.361031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.361280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.361301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.361532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.361553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.361840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.361861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.362122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.362154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.362457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.362490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.362736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.362767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.363106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.363137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.363451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.363483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 
00:30:35.672 [2024-07-16 00:56:53.363805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.363836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.364170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.364202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.364455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.364489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.364720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.364752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.365046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.365066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.365377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.365398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.365713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.365744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.366080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.366125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.366328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.366349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.366558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.366579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 
00:30:35.672 [2024-07-16 00:56:53.366866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.366887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.367032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.367065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.367380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.367412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.367641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.367673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.367834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.367866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.368192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.368223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.368496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.368529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.368773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.368805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.369162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.369194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.369508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.369540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 
00:30:35.672 [2024-07-16 00:56:53.369754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.369785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.370086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.370107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.370368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.370408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.370752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.370784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.370943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.370975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.371188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.371209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.371397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.371418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.371714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.371745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.371982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.372013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 00:30:35.672 [2024-07-16 00:56:53.372227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.672 [2024-07-16 00:56:53.372247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.672 qpair failed and we were unable to recover it. 
00:30:35.672 [2024-07-16 00:56:53.372580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.672 [2024-07-16 00:56:53.372612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:35.672 qpair failed and we were unable to recover it.
[... the same pair of errors (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock error for tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420) repeats for every retry between 00:56:53.372 and 00:56:53.434, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:30:35.676 [2024-07-16 00:56:53.434835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.676 [2024-07-16 00:56:53.434856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:35.676 qpair failed and we were unable to recover it.
00:30:35.676 [2024-07-16 00:56:53.435079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.435100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 00:30:35.676 [2024-07-16 00:56:53.435218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.435249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 00:30:35.676 [2024-07-16 00:56:53.435471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.435503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 00:30:35.676 [2024-07-16 00:56:53.435788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.435819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 00:30:35.676 [2024-07-16 00:56:53.436103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.436141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 00:30:35.676 [2024-07-16 00:56:53.436360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.436393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 00:30:35.676 [2024-07-16 00:56:53.436677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.436708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 00:30:35.676 [2024-07-16 00:56:53.436848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.436885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 00:30:35.676 [2024-07-16 00:56:53.437198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.437229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 00:30:35.676 [2024-07-16 00:56:53.437549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.437581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 
00:30:35.676 [2024-07-16 00:56:53.437898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.437930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 00:30:35.676 [2024-07-16 00:56:53.438244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.438296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 00:30:35.676 [2024-07-16 00:56:53.438606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.438638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 00:30:35.676 [2024-07-16 00:56:53.438949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.676 [2024-07-16 00:56:53.438981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.676 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.439275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.439308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.439616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.439647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.439962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.439994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.440316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.440350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.440604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.440636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.440977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.441009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 
00:30:35.677 [2024-07-16 00:56:53.441322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.441355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.441674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.441706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.442012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.442044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.442280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.442312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.442627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.442659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.442957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.442988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.443305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.443338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.443650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.443682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.444022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.444054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.444346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.444378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 
00:30:35.677 [2024-07-16 00:56:53.444720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.444751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.445003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.445024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.445301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.445323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.445577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.445598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.445731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.445752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.446010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.446030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.446330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.446364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.446652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.446683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.446946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.446991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.447261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.447282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 
00:30:35.677 [2024-07-16 00:56:53.447517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.447537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.447864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.447896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.448192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.448223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.448527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.448547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.448775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.448796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.448997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.449017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.449219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.449240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.449549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.449591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.449907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.449939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.450242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.450271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 
00:30:35.677 [2024-07-16 00:56:53.450512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.450533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.450729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.450750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.450960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.450980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.451237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.451266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.451496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.451517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.451705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.451726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.452028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.452060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.452394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.452428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.452691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.452723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.453008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.453040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 
00:30:35.677 [2024-07-16 00:56:53.453294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.453327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.453596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.453628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.453903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.453945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.454224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.454244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.454456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.454478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.454744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.454764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.455071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.455102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.455416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.455449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.455681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.677 [2024-07-16 00:56:53.455713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.677 qpair failed and we were unable to recover it. 00:30:35.677 [2024-07-16 00:56:53.456002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.456022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 
00:30:35.678 [2024-07-16 00:56:53.456336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.456357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.456695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.456727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.457015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.457047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.457276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.457316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.457515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.457537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.457771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.457792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.458051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.458071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.458388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.458420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.458644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.458676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.458818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.458850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 
00:30:35.678 [2024-07-16 00:56:53.459109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.459142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.459358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.459391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.459623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.459655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.459893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.459925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.460135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.460167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.460496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.460517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.460773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.460794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.461094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.461131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.461447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.461479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.461665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.461698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 
00:30:35.678 [2024-07-16 00:56:53.462012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.462043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.462334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.462366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.462595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.462626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.462841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.462873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.463182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.463223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.463588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.463620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.463942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.463962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.464135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.464155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.464448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.464469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.464715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.464747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 
00:30:35.678 [2024-07-16 00:56:53.465053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.465087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.465381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.465403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.465623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.465643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.465946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.465966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.466232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.466253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.466585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.466617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.466923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.466956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.467183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.467215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.467537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.467570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.467812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.467844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 
00:30:35.678 [2024-07-16 00:56:53.468160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.468192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.468519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.468562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.468775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.468795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.469089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.469110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.469376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.469398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.469521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.469541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.469817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.469849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.470068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.470099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.470362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.470382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.470591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.470611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 
00:30:35.678 [2024-07-16 00:56:53.470825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.470856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.471069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.471101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.471390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.471423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.471751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.471771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.472069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.472101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.472330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.472362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.472595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.472627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.678 [2024-07-16 00:56:53.472850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.678 [2024-07-16 00:56:53.472887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.678 qpair failed and we were unable to recover it. 00:30:35.679 [2024-07-16 00:56:53.473195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.679 [2024-07-16 00:56:53.473227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.679 qpair failed and we were unable to recover it. 00:30:35.679 [2024-07-16 00:56:53.473561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.679 [2024-07-16 00:56:53.473581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.679 qpair failed and we were unable to recover it. 
00:30:35.679 [2024-07-16 00:56:53.473778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.679 [2024-07-16 00:56:53.473810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:35.679 qpair failed and we were unable to recover it.
00:30:35.679-00:30:35.968 [2024-07-16 00:56:53.473778 through 00:56:53.534661] the same three-line sequence repeats for every retry in this interval: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:30:35.968 [2024-07-16 00:56:53.534629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:35.968 [2024-07-16 00:56:53.534661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:35.968 qpair failed and we were unable to recover it.
00:30:35.968 [2024-07-16 00:56:53.534967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.968 [2024-07-16 00:56:53.534999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.968 qpair failed and we were unable to recover it. 00:30:35.968 [2024-07-16 00:56:53.535295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.968 [2024-07-16 00:56:53.535328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.968 qpair failed and we were unable to recover it. 00:30:35.968 [2024-07-16 00:56:53.535639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.968 [2024-07-16 00:56:53.535672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.968 qpair failed and we were unable to recover it. 00:30:35.968 [2024-07-16 00:56:53.535903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.968 [2024-07-16 00:56:53.535935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.968 qpair failed and we were unable to recover it. 00:30:35.968 [2024-07-16 00:56:53.536107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.968 [2024-07-16 00:56:53.536139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.968 qpair failed and we were unable to recover it. 00:30:35.968 [2024-07-16 00:56:53.536430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.968 [2024-07-16 00:56:53.536451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.968 qpair failed and we were unable to recover it. 00:30:35.968 [2024-07-16 00:56:53.536735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.968 [2024-07-16 00:56:53.536755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.968 qpair failed and we were unable to recover it. 00:30:35.968 [2024-07-16 00:56:53.536994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.968 [2024-07-16 00:56:53.537025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.968 qpair failed and we were unable to recover it. 00:30:35.968 [2024-07-16 00:56:53.537280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.968 [2024-07-16 00:56:53.537313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.968 qpair failed and we were unable to recover it. 00:30:35.968 [2024-07-16 00:56:53.537492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.968 [2024-07-16 00:56:53.537524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.968 qpair failed and we were unable to recover it. 
00:30:35.968 [2024-07-16 00:56:53.537737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.537768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.538092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.538124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.538291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.538324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.538543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.538575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.538886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.538933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.539245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.539290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.539618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.539651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.539888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.539919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.540078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.540110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.540347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.540379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 
00:30:35.969 [2024-07-16 00:56:53.540629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.540649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.540763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.540784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.541073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.541093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.541383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.541404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.541625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.541646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.541908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.541929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.542238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.542278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.542583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.542614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.542939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.542971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.543287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.543309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 
00:30:35.969 [2024-07-16 00:56:53.543509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.543530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.543660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.543681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.543953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.543973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.544281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.544313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.544476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.544508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.544738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.544769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.545056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.545088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.545429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.545450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.545758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.545778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.546091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.546112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 
00:30:35.969 [2024-07-16 00:56:53.546311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.546333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.546524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.546544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.546843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.969 [2024-07-16 00:56:53.546874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.969 qpair failed and we were unable to recover it. 00:30:35.969 [2024-07-16 00:56:53.547190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.547222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.547555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.547576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.547788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.547809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.548009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.548029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.548215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.548236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.548476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.548497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.548627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.548659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 
00:30:35.970 [2024-07-16 00:56:53.548913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.548945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.549195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.549226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.549512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.549544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.549863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.549895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.550131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.550167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.550501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.550523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.550802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.550833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.551171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.551203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.551504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.551546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.551758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.551789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 
00:30:35.970 [2024-07-16 00:56:53.552005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.552037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.552275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.552308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.552542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.552575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.552862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.552894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.553107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.553139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.553453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.553473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.553763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.553783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.554030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.554050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.554296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.554329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.554639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.554671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 
00:30:35.970 [2024-07-16 00:56:53.554983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.555015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.555288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.555321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.555637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.555668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.555986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.556018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.556310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.556343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.556555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.556587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.556808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.556839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.557128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.557160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.557443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.557476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.557701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.557733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 
00:30:35.970 [2024-07-16 00:56:53.557970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.558003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.558271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.558305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.558545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.558577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.970 [2024-07-16 00:56:53.558811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.970 [2024-07-16 00:56:53.558842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.970 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.559023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.559067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.559366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.559399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.559631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.559662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.559964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.559995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.560297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.560318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.560592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.560624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 
00:30:35.971 [2024-07-16 00:56:53.560950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.560982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.561229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.561271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.561520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.561551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.561834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.561854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.562152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.562189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.562520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.562553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.562868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.562900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.563217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.563248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.563571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.563591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.563856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.563898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 
00:30:35.971 [2024-07-16 00:56:53.564212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.564244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.564507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.564539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.564758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.564790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.565105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.565136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.565428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.565461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.565773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.565805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.565964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.565995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.566318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.566350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.566594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.566626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.566778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.566809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 
00:30:35.971 [2024-07-16 00:56:53.567125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.567170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.567457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.567478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.567738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.567758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.567898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.567918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.568209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.568240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.568407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.568428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.568659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.568679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.568962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.568983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.569132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.569152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.569453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.569474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 
00:30:35.971 [2024-07-16 00:56:53.569739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.569776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.570125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.570157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.570470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.570503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.570735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.971 [2024-07-16 00:56:53.570756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.971 qpair failed and we were unable to recover it. 00:30:35.971 [2024-07-16 00:56:53.570973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.571006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.571294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.571328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.571563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.571594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.571881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.571914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.572227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.572269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.572537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.572569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 
00:30:35.972 [2024-07-16 00:56:53.572785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.572817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.573112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.573143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.573416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.573450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.573770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.573802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.574152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.574189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.574532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.574553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.574866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.574898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.575128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.575160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.575484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.575505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.575767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.575787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 
00:30:35.972 [2024-07-16 00:56:53.576074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.576105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.576336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.576368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.576681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.576713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.576897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.576929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.577244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.577289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.577604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.577635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.577794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.577826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.578123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.578155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.578446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.578468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.578788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.578821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 
00:30:35.972 [2024-07-16 00:56:53.579039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.579071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.579366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.579399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.579703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.579735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.579988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.580019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.580262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.580294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.580480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.580511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.580795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.580815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.581116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.581136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.581414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.581446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.581735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.581767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 
00:30:35.972 [2024-07-16 00:56:53.582108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.582140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.582459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.582480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.582745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.582765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.583058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.583090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.972 qpair failed and we were unable to recover it. 00:30:35.972 [2024-07-16 00:56:53.583423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.972 [2024-07-16 00:56:53.583455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.583745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.583777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.584032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.584064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.584439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.584472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.584619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.584649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.584887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.584919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 
00:30:35.973 [2024-07-16 00:56:53.585141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.585173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.585463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.585484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.585711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.585732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.585994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.586026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.586349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.586386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.586674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.586695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.586884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.586905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.587090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.587110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.587339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.587371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.587671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.587703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 
00:30:35.973 [2024-07-16 00:56:53.587951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.587983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.588218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.588251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.588481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.588513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.588722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.588754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.588984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.589015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.589272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.589305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.589643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.589675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.589932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.589964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.590296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.590330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.590580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.590611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 
00:30:35.973 [2024-07-16 00:56:53.590832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.590854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.591055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.591075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.591362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.591384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.591592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.591612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.591900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.591920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.592191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.592212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.592444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.592464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.592656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.592688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.592923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.973 [2024-07-16 00:56:53.592954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.973 qpair failed and we were unable to recover it. 00:30:35.973 [2024-07-16 00:56:53.593201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.593233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 
00:30:35.974 [2024-07-16 00:56:53.593475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.593518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.593808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.593829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.594033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.594054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.594349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.594381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.594620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.594653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.594905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.594936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.595146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.595178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.595464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.595497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.595721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.595752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.596018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.596050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 
00:30:35.974 [2024-07-16 00:56:53.596312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.596345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.596567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.596599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.596774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.596806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.597123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.597154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.597469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.597507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.597663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.597682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.597945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.597976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.598276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.598309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.598524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.598544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.598729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.598749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 
00:30:35.974 [2024-07-16 00:56:53.598959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.598991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.599226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.599285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.599501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.599533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.599795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.599826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.600112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.600144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.600305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.600338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.600626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.600658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.600899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.600931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.601241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.601282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.601510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.601542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 
00:30:35.974 [2024-07-16 00:56:53.601830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.601861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.602079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.602111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.602454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.602487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.602711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.602743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.603001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.603032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.603337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.603370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.603693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.603725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.604040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.604072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.604309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.604343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.974 qpair failed and we were unable to recover it. 00:30:35.974 [2024-07-16 00:56:53.604577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.974 [2024-07-16 00:56:53.604597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 
00:30:35.975 [2024-07-16 00:56:53.604858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.604879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.605091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.605123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.605390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.605422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.605736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.605756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.606065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.606098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.606406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.606440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.606674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.606706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.606915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.606947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.607173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.607205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.607533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.607565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 
00:30:35.975 [2024-07-16 00:56:53.607894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.607914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.608220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.608241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.608462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.608494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.608738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.608770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.609057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.609093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.609247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.609289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.609601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.609641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.609938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.609970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.610188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.610221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.610573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.610606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 
00:30:35.975 [2024-07-16 00:56:53.610819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.610852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.611085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.611117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.611353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.611386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.611684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.611715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.611925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.611957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.612277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.612310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.612646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.612678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.612989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.613021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.613294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.613327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.613605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.613637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 
00:30:35.975 [2024-07-16 00:56:53.613960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.613992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.614242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.614283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.614497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.614529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.614843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.614875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.615120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.615151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.615480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.615513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.615814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.615835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.616102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.616134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.616485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.616518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 00:30:35.975 [2024-07-16 00:56:53.616833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.975 [2024-07-16 00:56:53.616864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.975 qpair failed and we were unable to recover it. 
00:30:35.976 [2024-07-16 00:56:53.617154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.617186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.617533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.617566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.617752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.617784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.618104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.618135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.618422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.618455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.618773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.618805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.619055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.619087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.619409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.619430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.619722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.619753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.619989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.620020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 
00:30:35.976 [2024-07-16 00:56:53.620312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.620344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.620659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.620691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.620899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.620920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.621111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.621132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.621431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.621469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.621806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.621838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.622150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.622182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.622410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.622443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.622729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.622760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.623075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.623106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 
00:30:35.976 [2024-07-16 00:56:53.623286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.623319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.623606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.623638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.623873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.623905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.624217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.624249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.624589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.624622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.624911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.624942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.625240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.625284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.625442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.625474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.625776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.625808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.626105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.626138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 
00:30:35.976 [2024-07-16 00:56:53.626372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.626405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.626731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.626764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.627039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.627070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.627349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.627369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.627576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.627596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.627786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.627806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.628067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.628087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.628280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.628301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.628559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.628598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 00:30:35.976 [2024-07-16 00:56:53.628883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.976 [2024-07-16 00:56:53.628915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.976 qpair failed and we were unable to recover it. 
00:30:35.976 [2024-07-16 00:56:53.629151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.629183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.629591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.629670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.630006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.630042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.630284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.630319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.630622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.630660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.630968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.631000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.631177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.631209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.631507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.631539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.631847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.631879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.632208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.632240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 
00:30:35.977 [2024-07-16 00:56:53.632482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.632515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.632734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.632766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.632977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.633007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.633347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.633379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.633715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.633746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.634077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.634109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.634422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.634454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.634776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.634807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.635090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.635123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.635439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.635471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 
00:30:35.977 [2024-07-16 00:56:53.635633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.635664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.635947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.635979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.636209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.636242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.636596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.636628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.636840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.636872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.637114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.637147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.637484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.637516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.637843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.637873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.638188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.638227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.638475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.638507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 
00:30:35.977 [2024-07-16 00:56:53.638801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.638833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.639045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.639075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.639378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.639412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.639636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.639667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.639908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.639939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.640247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.640293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.640606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.640637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.640781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.640810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.641042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.641074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.641389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.641421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 
00:30:35.977 [2024-07-16 00:56:53.641713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.977 [2024-07-16 00:56:53.641745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.977 qpair failed and we were unable to recover it. 00:30:35.977 [2024-07-16 00:56:53.641975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.642006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.642240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.642279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.642596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.642627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.642942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.642973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.643195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.643227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.643548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.643580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.643796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.643829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.643974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb9070 is same with the state(5) to be set 00:30:35.978 [2024-07-16 00:56:53.644319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.644350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 
00:30:35.978 [2024-07-16 00:56:53.644647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.644682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.644917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.644949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.645298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.645333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.645623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.645655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.645993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.646025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.646342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.646375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.646696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.646730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.647044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.647077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.647370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.647403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.647714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.647746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 
00:30:35.978 [2024-07-16 00:56:53.648038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.648070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.648323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.648355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.648649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.648681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.648898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.648919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.649185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.649227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.649524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.649557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.649858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.649890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.650186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.650218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.650529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.650563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.650775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.650814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 
00:30:35.978 [2024-07-16 00:56:53.651133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.651165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.651393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.651426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.651794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.651826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.652111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.652143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.652464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.652497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.652783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.652815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.978 [2024-07-16 00:56:53.653167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.978 [2024-07-16 00:56:53.653198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.978 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.653516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.653548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.653878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.653910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.654141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.654173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 
00:30:35.979 [2024-07-16 00:56:53.654419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.654451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.654737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.654769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.655051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.655072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.655284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.655305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.655508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.655528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.655731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.655764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.656048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.656080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.656335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.656368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.656688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.656721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.656936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.656968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 
00:30:35.979 [2024-07-16 00:56:53.657201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.657233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.657486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.657518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.657802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.657840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.658152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.658183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.658484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.658517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.658742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.658774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.659068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.659088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.659281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.659302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.659505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.659525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.659812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.659833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 
00:30:35.979 [2024-07-16 00:56:53.660043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.660064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.660331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.660376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.660535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.660566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.660869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.660902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.661077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.661109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.661322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.661355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.661684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.661715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.661969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.662001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.662232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.662272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.662583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.662640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 
00:30:35.979 [2024-07-16 00:56:53.662853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.662885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.663202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.663234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.663561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.663594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.663825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.663846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.664109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.664130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.664269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.664290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.664494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.979 [2024-07-16 00:56:53.664514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.979 qpair failed and we were unable to recover it. 00:30:35.979 [2024-07-16 00:56:53.664711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.664731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.664959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.664991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.665320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.665354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 
00:30:35.980 [2024-07-16 00:56:53.665602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.665634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.665982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.666013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.666248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.666289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.666527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.666558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.666710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.666731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.666877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.666898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.667139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.667161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.667422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.667443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.667722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.667754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.668011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.668043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 
00:30:35.980 [2024-07-16 00:56:53.668372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.668405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.668640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.668672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.668989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.669021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.669234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.669276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.669437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.669470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.669731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.669762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.670084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.670105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.670301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.670322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.670522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.670542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.670825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.670863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 
00:30:35.980 [2024-07-16 00:56:53.671207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.671239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.671424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.671456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.671700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.671732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.671955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.671975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.672175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.672207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.672536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.672569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.672898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.672929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.673241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.673282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.673601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.673633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.673925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.673963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 
00:30:35.980 [2024-07-16 00:56:53.674300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.674333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.674659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.674692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.675003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.675035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.675275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.675308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.675521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.675541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.675733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.675765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.676080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.676112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.980 [2024-07-16 00:56:53.676384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.980 [2024-07-16 00:56:53.676416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.980 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.676754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.676786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.677017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.677049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 
00:30:35.981 [2024-07-16 00:56:53.677279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.677313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.677567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.677587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.677888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.677920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.678162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.678194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.678515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.678548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.678854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.678885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.679188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.679219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.679544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.679577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.679892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.679913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.680234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.680276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 
00:30:35.981 [2024-07-16 00:56:53.680571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.680603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.680899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.680920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.681216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.681246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.681576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.681609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.681898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.681929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.682245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.682288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.682620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.682653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.682964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.682995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.683289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.683322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.683633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.683664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 
00:30:35.981 [2024-07-16 00:56:53.683958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.683989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.684303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.684336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.684677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.684709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.685025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.685055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.685236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.685278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.685595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.685628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.685930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.685961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.686297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.686330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.686638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.686659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.686861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.686885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 
00:30:35.981 [2024-07-16 00:56:53.687160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.687180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.687482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.687515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.687830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.687862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.688131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.688161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.688498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.688531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.688840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.688860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.689070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.689091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.689356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.981 [2024-07-16 00:56:53.689399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.981 qpair failed and we were unable to recover it. 00:30:35.981 [2024-07-16 00:56:53.689730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.689762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.690023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.690054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 
00:30:35.982 [2024-07-16 00:56:53.690396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.690417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.690718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.690750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.691002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.691033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.691295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.691328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.691614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.691646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.691957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.691978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.692304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.692338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.692628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.692660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.692954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.692986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.693298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.693331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 
00:30:35.982 [2024-07-16 00:56:53.693620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.693641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.693941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.693974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.694293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.694326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.694678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.694709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.694932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.694964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.695285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.695318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.695655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.695688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.696000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.696032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.696357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.696414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.696707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.696747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 
00:30:35.982 [2024-07-16 00:56:53.697032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.697064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.697396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.697429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.697742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.697774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.698017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.698049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.698315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.698348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.698663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.698685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.698991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.699023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.699337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.699369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.699685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.699718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.700005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.700048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 
00:30:35.982 [2024-07-16 00:56:53.700293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.700326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.700594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.700625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.700958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.982 [2024-07-16 00:56:53.700995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.982 qpair failed and we were unable to recover it. 00:30:35.982 [2024-07-16 00:56:53.701310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.701341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.701553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.701586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.701818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.701849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.702166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.702198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.702518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.702551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.702831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.702862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.703150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.703181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 
00:30:35.983 [2024-07-16 00:56:53.703419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.703452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.703753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.703784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.704088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.704120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.704420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.704453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.704661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.704682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.704963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.704994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.705279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.705312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.705537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.705569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.705829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.705860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.706145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.706176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 
00:30:35.983 [2024-07-16 00:56:53.706398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.706430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.706715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.706747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.707042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.707073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.707384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.707416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.707712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.707744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.707912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.707951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.708166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.708187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.708497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.708517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.708761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.708781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.709043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.709064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 
00:30:35.983 [2024-07-16 00:56:53.709406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.709427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.709711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.709731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.709996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.710016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.710281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.710302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.710592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.710624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.710880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.710912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.711270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.711303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.711536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.711568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.711850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.711871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.712166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.712203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 
00:30:35.983 [2024-07-16 00:56:53.712463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.712496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.712842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.712874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.713105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.713136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.983 qpair failed and we were unable to recover it. 00:30:35.983 [2024-07-16 00:56:53.713450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.983 [2024-07-16 00:56:53.713482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.713790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.713821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.714032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.714053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.714316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.714338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.714599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.714620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.714918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.714938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.715166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.715186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 
00:30:35.984 [2024-07-16 00:56:53.715484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.715517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.715664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.715695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.715932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.715963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.716281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.716314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.716551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.716583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.716834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.716866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.717165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.717196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.717353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.717385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.717603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.717635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.717869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.717890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 
00:30:35.984 [2024-07-16 00:56:53.718029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.718049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.718179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.718200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.718324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.718347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.718576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.718596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.718832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.718853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.718967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.718988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.719275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.719297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.719623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.719654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.719886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.719917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.720149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.720180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 
00:30:35.984 [2024-07-16 00:56:53.720499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.720532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.720807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.720827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.721098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.721118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.721406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.721427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.721686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.721717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.722018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.722038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.722287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.722309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.722514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.722535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.722804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.722849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.723152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.723189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 
00:30:35.984 [2024-07-16 00:56:53.723475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.723507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.723843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.723875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.724159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.724190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.724418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.984 [2024-07-16 00:56:53.724450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.984 qpair failed and we were unable to recover it. 00:30:35.984 [2024-07-16 00:56:53.724596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.724628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.724944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.724976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.725286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.725318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.725635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.725666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.725925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.725944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.726204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.726223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 
00:30:35.985 [2024-07-16 00:56:53.726457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.726477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.726670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.726690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.726982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.727013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.727306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.727339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.727650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.727684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.727924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.727956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.728245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.728287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.728593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.728635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.728908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.728947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.729246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.729301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 
00:30:35.985 [2024-07-16 00:56:53.729553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.729585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.729907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.729938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.730161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.730193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.730422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.730454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.730764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.730796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.731022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.731043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.731183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.731214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.731450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.731482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.731796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.731827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.732142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.732161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 
00:30:35.985 [2024-07-16 00:56:53.732468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.732501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.732754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.732784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.733102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.733122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.733323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.733344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.733614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.733634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.733824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.733843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.734134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.734155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.734380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.734402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.734601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.734621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.734910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.734947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 
00:30:35.985 [2024-07-16 00:56:53.735192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.735223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.735388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.735419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.735628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.735659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.735938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.735970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.985 [2024-07-16 00:56:53.736271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.985 [2024-07-16 00:56:53.736305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.985 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.736466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.736500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.736752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.736784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.736988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.737009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.737322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.737342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.737620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.737641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 
00:30:35.986 [2024-07-16 00:56:53.737933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.737964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.738281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.738313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.738605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.738638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.738934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.738954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.739280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.739312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.739537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.739569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.739896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.739917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.740155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.740176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.740429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.740450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.740575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.740597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 
00:30:35.986 [2024-07-16 00:56:53.740805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.740825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.741009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.741030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.741280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.741313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.741628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.741662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.741901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.741932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.742237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.742296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.742618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.742650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.742905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.742937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.743226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.743271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.743586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.743618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 
00:30:35.986 [2024-07-16 00:56:53.743930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.743962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.744243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.744289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.744536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.744567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.744892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.744924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.745233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.745277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.745444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.745476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.745706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.745737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.746055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.746087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.746245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.986 [2024-07-16 00:56:53.746275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.986 qpair failed and we were unable to recover it. 00:30:35.986 [2024-07-16 00:56:53.746460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.746484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 
00:30:35.987 [2024-07-16 00:56:53.746773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.746793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.747009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.747029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.747280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.747301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.747472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.747493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.747637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.747657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.747960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.747992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.748245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.748285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.748625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.748657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.748918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.748950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.749270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.749303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 
00:30:35.987 [2024-07-16 00:56:53.749635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.749667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.749991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.750011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.750338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.750358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.750569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.750589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.750782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.750803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.751037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.751058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.751280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.751312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.751613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.751658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.751877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.751897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.752171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.752191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 
00:30:35.987 [2024-07-16 00:56:53.752458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.752479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.752670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.752691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.752925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.752956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.753267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.753300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.753602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.753635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.753906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.753927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.754250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.754298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.754533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.754565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.754856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.754886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.755194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.755214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 
00:30:35.987 [2024-07-16 00:56:53.755538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.755558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.755812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.755844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.756109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.756141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.756299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.756331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.756647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.756678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.756893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.756913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.757207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.757227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.757499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.757520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.757745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.757765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 00:30:35.987 [2024-07-16 00:56:53.758057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.987 [2024-07-16 00:56:53.758090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.987 qpair failed and we were unable to recover it. 
00:30:35.988 [2024-07-16 00:56:53.758421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.758453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.758667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.758699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.758960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.758992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.759309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.759330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.759619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.759639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.759781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.759801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.759926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.759948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.760238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.760279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.760520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.760553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.760905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.760936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 
00:30:35.988 [2024-07-16 00:56:53.761314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.761335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.761647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.761678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.761973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.762004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.762231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.762272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.762617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.762649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.762949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.762980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.763304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.763337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.763649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.763680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.763970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.764016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.764325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.764357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 
00:30:35.988 [2024-07-16 00:56:53.764649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.764681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.764989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.765021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.765171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.765203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.765508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.765541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.765792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.765823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.766155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.766187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.766498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.766537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.766827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.766859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.767077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.767108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.767273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.767306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 
00:30:35.988 [2024-07-16 00:56:53.767520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.767553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.767870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.767901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.768201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.768221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.768538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.768559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.768766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.768787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.769052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.769084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.769428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.769461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.769772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.769805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.770107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.770139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 00:30:35.988 [2024-07-16 00:56:53.770475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.988 [2024-07-16 00:56:53.770507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.988 qpair failed and we were unable to recover it. 
00:30:35.988 [2024-07-16 00:56:53.770867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.770888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.771141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.771161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.771459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.771491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.771816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.771847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.772109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.772141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.772428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.772460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.772613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.772644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.772930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.772962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.773206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.773226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.773524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.773543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 
00:30:35.989 [2024-07-16 00:56:53.773805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.773822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.774039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.774067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.774382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.774412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.774710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.774738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.774947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.774977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.775302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.775332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.775544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.775572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.775827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.775856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.776081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.776109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.776421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.776452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 
00:30:35.989 [2024-07-16 00:56:53.776687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.776717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.777031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.777061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.777406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.777437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.777745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.777776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.778073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.778109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.778329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.778362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.778594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.778632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.778821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.778842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.778964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.778984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 00:30:35.989 [2024-07-16 00:56:53.779287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:35.989 [2024-07-16 00:56:53.779319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:35.989 qpair failed and we were unable to recover it. 
00:30:36.267 [2024-07-16 00:56:53.835863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.835895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.836127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.836147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.836356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.836388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.836546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.836578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.836893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.836925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.837202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.837223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.837495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.837520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.837722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.837743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.838002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.838039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.838287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.838319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 
00:30:36.267 [2024-07-16 00:56:53.838603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.838635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.838857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.838889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.839207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.839240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.839512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.839545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.839862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.839894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.840216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.840247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.840570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.840591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.840892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.840923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.841236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.841278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.841534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.841566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 
00:30:36.267 [2024-07-16 00:56:53.841789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.841821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.842049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.842069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.842269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.842289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.842550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.267 [2024-07-16 00:56:53.842571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.267 qpair failed and we were unable to recover it. 00:30:36.267 [2024-07-16 00:56:53.842702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.842722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.842918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.842949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.843271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.843303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.843616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.843648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.843985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.844016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.844321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.844354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 
00:30:36.268 [2024-07-16 00:56:53.844681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.844713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.845026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.845058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.845287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.845319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.845662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.845694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.845846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.845879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.846201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.846222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.846436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.846458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.846671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.846691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.847000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.847033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.847337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.847370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 
00:30:36.268 [2024-07-16 00:56:53.847668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.847700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.847936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.847968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.848194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.848214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.848404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.848424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.848619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.848639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.848904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.848936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.849244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.849292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.849639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.849671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.849957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.849989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.850308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.850341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 
00:30:36.268 [2024-07-16 00:56:53.850616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.850648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.850977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.851008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.851269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.851290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.851482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.851502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.851796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.851828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.852131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.852163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.852453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.852486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.852772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.852804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.853119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.853151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.853466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.853498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 
00:30:36.268 [2024-07-16 00:56:53.853819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.853851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.854122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.854153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.854441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.854474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.854791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.268 [2024-07-16 00:56:53.854823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.268 qpair failed and we were unable to recover it. 00:30:36.268 [2024-07-16 00:56:53.855039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.855071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.855374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.855406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.855720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.855751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.856044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.856076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.856398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.856430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.856760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.856791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 
00:30:36.269 [2024-07-16 00:56:53.857046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.857079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.857396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.857430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.857659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.857690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.857935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.857968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.858309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.858342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.858517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.858547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.858855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.858887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.859186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.859218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.859529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.859562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.859872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.859904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 
00:30:36.269 [2024-07-16 00:56:53.860124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.860146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.860352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.860374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.860676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.860708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.861022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.861053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.861285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.861306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.861592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.861612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.861806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.861829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.862062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.862093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.862406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.862439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.862762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.862795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 
00:30:36.269 [2024-07-16 00:56:53.863081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.863113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.863447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.863468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.863747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.863768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.864047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.864079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.864400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.864432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.864648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.864680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.864905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.864937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.865220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.865241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.865561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.865582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.865885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.865906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 
00:30:36.269 [2024-07-16 00:56:53.866200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.866232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.866556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.866589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.866928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.866959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.867272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.269 [2024-07-16 00:56:53.867314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.269 qpair failed and we were unable to recover it. 00:30:36.269 [2024-07-16 00:56:53.867630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.867662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.867967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.867999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.868303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.868337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.868571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.868603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.868914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.868946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.869234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.869261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 
00:30:36.270 [2024-07-16 00:56:53.869524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.869544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.869844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.869876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.870130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.870172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.870387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.870409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.870601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.870621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.870829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.870849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.871108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.871146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.871363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.871396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.871689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.871721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.872026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.872058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 
00:30:36.270 [2024-07-16 00:56:53.872360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.872393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.872625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.872657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.872867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.872899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.873110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.873141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.873474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.873494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.873770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.873790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.874062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.874086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.874293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.874314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.874500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.874521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.874809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.874840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 
00:30:36.270 [2024-07-16 00:56:53.875184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.875215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.875539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.875572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.875885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.875917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.876196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.876229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.876485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.876506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.876694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.876714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.876910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.876942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.877194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.877225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.877553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.877586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 00:30:36.270 [2024-07-16 00:56:53.877747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.270 [2024-07-16 00:56:53.877779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.270 qpair failed and we were unable to recover it. 
00:30:36.270 [2024-07-16 00:56:53.878070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.270 [2024-07-16 00:56:53.878090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:36.270 qpair failed and we were unable to recover it.
00:30:36.270 [... the same three-line error repeats for every retried connection attempt between 00:56:53.878 and 00:56:53.940: connect() to 10.0.0.2, port=4420 keeps failing with errno = 111 and tqpair=0x7efd44000b90 cannot be recovered ...]
00:30:36.276 [2024-07-16 00:56:53.940652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.276 [2024-07-16 00:56:53.940684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:36.276 qpair failed and we were unable to recover it.
00:30:36.276 [2024-07-16 00:56:53.940997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.941029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.941200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.941231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.941452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.941473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.941790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.941815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.942085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.942105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.942426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.942459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.942672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.942704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.943041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.943073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.943333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.943366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.943613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.943634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 
00:30:36.276 [2024-07-16 00:56:53.943904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.943925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.944061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.944082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.944375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.944412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.944726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.944757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.945038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.945070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.945359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.945392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.945619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.945651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.945905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.945937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.946166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.946197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.946559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.946592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 
00:30:36.276 [2024-07-16 00:56:53.946838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.946870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.947032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.947064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.947351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.947384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.947620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.947652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.947884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.947904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.948206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.948238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.276 [2024-07-16 00:56:53.948472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.276 [2024-07-16 00:56:53.948492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.276 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.948718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.948738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.948982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.949003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.949314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.949334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 
00:30:36.277 [2024-07-16 00:56:53.949569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.949601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.949836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.949869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.950180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.950212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.950572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.950604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.950906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.950937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.951169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.951200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.951432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.951453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.951659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.951681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.951889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.951910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.952213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.952244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 
00:30:36.277 [2024-07-16 00:56:53.952582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.952615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.952847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.952878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.953122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.953154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.953442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.953486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.953652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.953684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.953933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.953964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.954301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.954332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.954610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.954643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.954866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.954897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.955185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.955216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 
00:30:36.277 [2024-07-16 00:56:53.955537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.955559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.955850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.955871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.956177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.956208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.956461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.956494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.956835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.956869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.957027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.957059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.957234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.957264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.957544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.957565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.957728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.957749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.958013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.958045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 
00:30:36.277 [2024-07-16 00:56:53.958356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.958390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.958666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.958697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.958930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.958961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.959270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.959302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.959621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.959653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.959884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.959916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.960231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.277 [2024-07-16 00:56:53.960273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.277 qpair failed and we were unable to recover it. 00:30:36.277 [2024-07-16 00:56:53.960622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.960643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.960866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.960886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.961115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.961147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 
00:30:36.278 [2024-07-16 00:56:53.961367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.961400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.961689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.961720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.961894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.961925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.962094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.962125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.962448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.962481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.962736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.962768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.963007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.963039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.963351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.963383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.963609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.963629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.963818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.963839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 
00:30:36.278 [2024-07-16 00:56:53.963990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.964010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.964158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.964190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.964449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.964482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.964730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.964768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.965090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.965111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.965303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.965323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.965534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.965555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.965797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.965829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.966016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.966047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.966328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.966349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 
00:30:36.278 [2024-07-16 00:56:53.966642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.966663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.966875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.966895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.967086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.967118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.967423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.967456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.967708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.967740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.967894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.967926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.968195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.968238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.968484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.968504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.968658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.968679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.969059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.969091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 
00:30:36.278 [2024-07-16 00:56:53.969431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.969463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.969760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.969780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.970084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.970116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.970301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.970336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.970650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.970682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.970973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.971005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.971277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.971310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.278 qpair failed and we were unable to recover it. 00:30:36.278 [2024-07-16 00:56:53.971571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.278 [2024-07-16 00:56:53.971603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.971755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.971787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.972142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.972174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 
00:30:36.279 [2024-07-16 00:56:53.972482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.972514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.972681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.972713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.973027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.973059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.973989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.974026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.974354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.974377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.974642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.974663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.974813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.974833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.975199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.975231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.975544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.975577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.975750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.975782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 
00:30:36.279 [2024-07-16 00:56:53.976080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.976112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.976412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.976436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.976600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.976620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.976883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.976909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.977104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.977126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.977439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.977462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.977612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.977633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.977836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.977858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.978169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.978189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.978402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.978424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 
00:30:36.279 [2024-07-16 00:56:53.978626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.978646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.978936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.978967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.979252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.979297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.979606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.979640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.979874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.979906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.980242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.980286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.980525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.980556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.980833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.980854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.981112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.981133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.981287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.981309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 
00:30:36.279 [2024-07-16 00:56:53.981598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.981631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.981900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.981933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.982168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.982199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.982585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.982607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.982807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.982828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.984058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.984100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.984365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.984389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.984628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.279 [2024-07-16 00:56:53.984650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.279 qpair failed and we were unable to recover it. 00:30:36.279 [2024-07-16 00:56:53.984823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.984841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.985042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.985074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 
00:30:36.280 [2024-07-16 00:56:53.985296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.985330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.985510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.985531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.985692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.985731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.985912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.985944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.986249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.986294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.986624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.986656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.987003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.987034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.987378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.987413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.987646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.987678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.987851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.987883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 
00:30:36.280 [2024-07-16 00:56:53.988105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.988137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.988414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.988448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.988603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.988634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.988883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.988921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.989216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.989236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.989508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.989528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.989725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.989757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.990021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.990053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.990367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.990399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.990631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.990665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 
00:30:36.280 [2024-07-16 00:56:53.991059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.991091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.991243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.991287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.991546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.991590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.991787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.991808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.992028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.992049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.992321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.992367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.992554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.992586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.992764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.992797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.993034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.993066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.993328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.993350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 
00:30:36.280 [2024-07-16 00:56:53.993563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.993595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.993766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.993797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.280 [2024-07-16 00:56:53.994022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.280 [2024-07-16 00:56:53.994056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.280 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.994311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.994344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.994582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.994614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.994827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.994859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.995141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.995173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.995333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.995367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.995551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.995582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.995834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.995855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 
00:30:36.281 [2024-07-16 00:56:53.996160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.996193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.996464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.996497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.996713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.996754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.996898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.996918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.997208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.997229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.997566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.997598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.997764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.997796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.998095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.998127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.998343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.998377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.998633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.998653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 
00:30:36.281 [2024-07-16 00:56:53.998858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.998877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.999138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.999175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.999433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.999466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:53.999707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:53.999744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.000025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.000057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.000350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.000382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.000650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.000696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.000908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.000929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.001216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.001236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.001446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.001468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 
00:30:36.281 [2024-07-16 00:56:54.001739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.001759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.001998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.002017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.002216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.002236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.002461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.002482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.002616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.002636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.002841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.002873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.003216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.003248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.003507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.003539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.003767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.003799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.004041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.004072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 
00:30:36.281 [2024-07-16 00:56:54.004299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.004333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.004577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.004609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.004890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.281 [2024-07-16 00:56:54.004910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.281 qpair failed and we were unable to recover it. 00:30:36.281 [2024-07-16 00:56:54.005174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.005195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.005404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.005425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.005701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.005733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.006061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.006093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.006435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.006467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.006705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.006737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.006911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.006943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 
00:30:36.282 [2024-07-16 00:56:54.007137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.007170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.007414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.007436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.007722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.007742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.007967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.007987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.008250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.008283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.008481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.008501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.008710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.008730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.008955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.008975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.009199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.009219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.009344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.009365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 
00:30:36.282 [2024-07-16 00:56:54.009632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.009664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.009904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.009936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.010147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.010179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.010488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.010513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.010642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.010663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.010939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.010959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.011243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.011327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.011591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.011629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.011784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.011816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.012061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.012093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 
00:30:36.282 [2024-07-16 00:56:54.012329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.012362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.012580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.012613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.012757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.012777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.012982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.013027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.013190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.013222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.013452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.013486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.013780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.013801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.014023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.014043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.014247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.014279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 00:30:36.282 [2024-07-16 00:56:54.014398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.282 [2024-07-16 00:56:54.014418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.282 qpair failed and we were unable to recover it. 
00:30:36.283 [2024-07-16 00:56:54.014668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.014700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.014845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.014877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.015092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.015124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.015340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.015373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.015583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.015603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.015807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.015828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.015943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.015965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.016172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.016192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.016403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.016423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.016712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.016744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 
00:30:36.283 [2024-07-16 00:56:54.017109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.017184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.017389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.017425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.017686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.017718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.017934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.017966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.018144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.018176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.018402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.018435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.018664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.018696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.018981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.019013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.019182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.019214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.019483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.019516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 
00:30:36.283 [2024-07-16 00:56:54.019681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.019712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.020000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.020032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.020281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.020314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.020626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.020667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.020811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.020843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.020978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.021003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.021248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.021290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.021581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.021613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.021839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.021871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.022092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.022123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 
00:30:36.283 [2024-07-16 00:56:54.022353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.022385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.022548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.022580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.022898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.022929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.023274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.023307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.023520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.023551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.023856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.023889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.024107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.024138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.024317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.024351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.024571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.024602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 00:30:36.283 [2024-07-16 00:56:54.024815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.283 [2024-07-16 00:56:54.024847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.283 qpair failed and we were unable to recover it. 
00:30:36.284 [2024-07-16 00:56:54.025000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.284 [2024-07-16 00:56:54.025031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:36.284 qpair failed and we were unable to recover it.
00:30:36.284 [... the same three-message sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously with only the timestamps changing, from 00:56:54.025 through 00:56:54.080 ...]
00:30:36.289 [2024-07-16 00:56:54.080519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.080555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.080830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.080869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.081120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.081139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.081320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.081341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.081622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.081641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.081765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.081785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.081982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.082002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.082169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.082200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.082412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.082444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.082667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.082709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 
00:30:36.289 [2024-07-16 00:56:54.082962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.082982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.083224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.083266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.083574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.083605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.083752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.083771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.084043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.084063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.084177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.084197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.084322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.084342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.084476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.084496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.084707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.084726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.084869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.084900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 
00:30:36.289 [2024-07-16 00:56:54.085146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.289 [2024-07-16 00:56:54.085177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.289 qpair failed and we were unable to recover it. 00:30:36.289 [2024-07-16 00:56:54.085502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.290 [2024-07-16 00:56:54.085534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.290 qpair failed and we were unable to recover it. 00:30:36.290 [2024-07-16 00:56:54.085851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.290 [2024-07-16 00:56:54.085884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.290 qpair failed and we were unable to recover it. 00:30:36.290 [2024-07-16 00:56:54.086117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.290 [2024-07-16 00:56:54.086148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.290 qpair failed and we were unable to recover it. 00:30:36.290 [2024-07-16 00:56:54.086355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.290 [2024-07-16 00:56:54.086387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.290 qpair failed and we were unable to recover it. 00:30:36.290 [2024-07-16 00:56:54.086614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.290 [2024-07-16 00:56:54.086645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.290 qpair failed and we were unable to recover it. 00:30:36.562 [2024-07-16 00:56:54.086937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.562 [2024-07-16 00:56:54.086983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.562 qpair failed and we were unable to recover it. 00:30:36.562 [2024-07-16 00:56:54.087188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.562 [2024-07-16 00:56:54.087209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.562 qpair failed and we were unable to recover it. 00:30:36.562 [2024-07-16 00:56:54.087459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.562 [2024-07-16 00:56:54.087479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.562 qpair failed and we were unable to recover it. 00:30:36.562 [2024-07-16 00:56:54.087600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.562 [2024-07-16 00:56:54.087619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.562 qpair failed and we were unable to recover it. 
00:30:36.562 [2024-07-16 00:56:54.087827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.562 [2024-07-16 00:56:54.087846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.562 qpair failed and we were unable to recover it. 00:30:36.562 [2024-07-16 00:56:54.088036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.562 [2024-07-16 00:56:54.088055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.562 qpair failed and we were unable to recover it. 00:30:36.562 [2024-07-16 00:56:54.088272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.562 [2024-07-16 00:56:54.088293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.562 qpair failed and we were unable to recover it. 00:30:36.562 [2024-07-16 00:56:54.088490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.562 [2024-07-16 00:56:54.088509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.562 qpair failed and we were unable to recover it. 00:30:36.562 [2024-07-16 00:56:54.088767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.562 [2024-07-16 00:56:54.088786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.562 qpair failed and we were unable to recover it. 00:30:36.562 [2024-07-16 00:56:54.088966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.562 [2024-07-16 00:56:54.088986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.562 qpair failed and we were unable to recover it. 00:30:36.562 [2024-07-16 00:56:54.089093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.562 [2024-07-16 00:56:54.089113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.562 qpair failed and we were unable to recover it. 00:30:36.562 [2024-07-16 00:56:54.089313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.562 [2024-07-16 00:56:54.089333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.562 qpair failed and we were unable to recover it. 00:30:36.562 [2024-07-16 00:56:54.089530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.562 [2024-07-16 00:56:54.089550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.562 qpair failed and we were unable to recover it. 00:30:36.562 [2024-07-16 00:56:54.089738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.089769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 
00:30:36.563 [2024-07-16 00:56:54.089998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.090034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.090324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.090355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.090597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.090627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.090903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.090933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.091085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.091115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.091344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.091375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.091724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.091755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.091979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.092011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.092143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.092174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.092450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.092483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 
00:30:36.563 [2024-07-16 00:56:54.092690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.092721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.092934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.092965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.093168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.093198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.093507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.093539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.093695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.093726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.093968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.093998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.094207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.094239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.094474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.094505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.094720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.094751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.094968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.094987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 
00:30:36.563 [2024-07-16 00:56:54.095252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.095293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.095437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.095468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.095699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.095729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.095955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.095974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.096268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.096301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.096604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.096634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.096840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.096871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.097100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.097132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.097431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.097463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.097685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.097716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 
00:30:36.563 [2024-07-16 00:56:54.097987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.098018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.098251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.098291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.098527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.098558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.098763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.098793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.099089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.099108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.099328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.099361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.099575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.099595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.099871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.099901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.563 qpair failed and we were unable to recover it. 00:30:36.563 [2024-07-16 00:56:54.100117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.563 [2024-07-16 00:56:54.100147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.100296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.100328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 
00:30:36.564 [2024-07-16 00:56:54.100530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.100571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.100808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.100839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.101138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.101169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.101371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.101403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.101647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.101677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.101955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.101985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.102203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.102234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.102456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.102476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.102684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.102715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.103036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.103066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 
00:30:36.564 [2024-07-16 00:56:54.103288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.103320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.103618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.103649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.103865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.103895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.104171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.104202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.104476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.104507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.104729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.104760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.105061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.105091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.105327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.105360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.105693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.105724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.105942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.105972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 
00:30:36.564 [2024-07-16 00:56:54.106102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.106132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.106431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.106463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.106603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.106622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.106822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.106841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.107111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.107130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.107307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.107327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.107436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.107466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.107688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.107719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.107994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.108026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.108313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.108363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 
00:30:36.564 [2024-07-16 00:56:54.108521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.108551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.108797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.108828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.109147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.109167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.109439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.109459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.109648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.109667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.109862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.109881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.110081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.110100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.110375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.564 [2024-07-16 00:56:54.110395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.564 qpair failed and we were unable to recover it. 00:30:36.564 [2024-07-16 00:56:54.110664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.565 [2024-07-16 00:56:54.110695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.565 qpair failed and we were unable to recover it. 00:30:36.565 [2024-07-16 00:56:54.111005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.565 [2024-07-16 00:56:54.111036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.565 qpair failed and we were unable to recover it. 
00:30:36.565 [2024-07-16 00:56:54.111265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.565 [2024-07-16 00:56:54.111302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.565 qpair failed and we were unable to recover it. 00:30:36.565 [2024-07-16 00:56:54.111471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.565 [2024-07-16 00:56:54.111502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.565 qpair failed and we were unable to recover it. 00:30:36.565 [2024-07-16 00:56:54.111740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.565 [2024-07-16 00:56:54.111771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.565 qpair failed and we were unable to recover it. 00:30:36.565 [2024-07-16 00:56:54.112045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.565 [2024-07-16 00:56:54.112075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.565 qpair failed and we were unable to recover it. 00:30:36.565 [2024-07-16 00:56:54.112281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.565 [2024-07-16 00:56:54.112313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.565 qpair failed and we were unable to recover it. 00:30:36.565 [2024-07-16 00:56:54.112601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.565 [2024-07-16 00:56:54.112632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.565 qpair failed and we were unable to recover it. 00:30:36.565 [2024-07-16 00:56:54.112935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.565 [2024-07-16 00:56:54.112966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.565 qpair failed and we were unable to recover it. 00:30:36.565 [2024-07-16 00:56:54.113127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.565 [2024-07-16 00:56:54.113158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.565 qpair failed and we were unable to recover it. 00:30:36.565 [2024-07-16 00:56:54.113433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.565 [2024-07-16 00:56:54.113466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.565 qpair failed and we were unable to recover it. 00:30:36.565 [2024-07-16 00:56:54.113681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.565 [2024-07-16 00:56:54.113712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.565 qpair failed and we were unable to recover it. 
00:30:36.565 [2024-07-16 00:56:54.113934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:36.565 [2024-07-16 00:56:54.113966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 
00:30:36.565 qpair failed and we were unable to recover it. 
00:30:36.565 [the same three-line pattern - posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." - repeats for every reconnect attempt logged between 00:56:54.113 and 00:56:54.148] 
00:30:36.568 [2024-07-16 00:56:54.148840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:36.568 [2024-07-16 00:56:54.148908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 
00:30:36.568 qpair failed and we were unable to recover it. 
00:30:36.568 [three consecutive failures are logged against tqpair=0x7efd34000b90 (00:56:54.148840 through 00:56:54.149344), after which the log switches back to tqpair=0x7efd44000b90 and the identical failure triplet continues without interruption through 00:56:54.167057] 
00:30:36.570 qpair failed and we were unable to recover it. 
00:30:36.570 [2024-07-16 00:56:54.167281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.570 [2024-07-16 00:56:54.167301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.570 qpair failed and we were unable to recover it. 00:30:36.570 [2024-07-16 00:56:54.167544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.570 [2024-07-16 00:56:54.167563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.570 qpair failed and we were unable to recover it. 00:30:36.570 [2024-07-16 00:56:54.167810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.570 [2024-07-16 00:56:54.167829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.570 qpair failed and we were unable to recover it. 00:30:36.570 [2024-07-16 00:56:54.168044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.570 [2024-07-16 00:56:54.168063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.570 qpair failed and we were unable to recover it. 00:30:36.570 [2024-07-16 00:56:54.168189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.570 [2024-07-16 00:56:54.168207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.570 qpair failed and we were unable to recover it. 00:30:36.570 [2024-07-16 00:56:54.168452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.570 [2024-07-16 00:56:54.168471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.570 qpair failed and we were unable to recover it. 00:30:36.570 [2024-07-16 00:56:54.168669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.570 [2024-07-16 00:56:54.168700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.570 qpair failed and we were unable to recover it. 00:30:36.570 [2024-07-16 00:56:54.168946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.570 [2024-07-16 00:56:54.169016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:36.570 qpair failed and we were unable to recover it. 00:30:36.570 [2024-07-16 00:56:54.169350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.570 [2024-07-16 00:56:54.169388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:36.570 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.169619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.169651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 
00:30:36.571 [2024-07-16 00:56:54.169876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.169910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.170039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.170070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.170244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.170283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.170469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.170501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.170715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.170746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.170892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.170911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.171042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.171079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.171285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.171317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.171473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.171504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.171798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.171829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 
00:30:36.571 [2024-07-16 00:56:54.172054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.172072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.172248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.172275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.172526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.172557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.172758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.172789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.173014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.173044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.173215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.173247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.173512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.173543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.173812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.173853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.174069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.174087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.174334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.174353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 
00:30:36.571 [2024-07-16 00:56:54.174544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.174563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.174689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.174708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.174951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.174971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.175154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.175186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.175460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.175492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.175744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.175763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.175939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.175958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.176192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.176223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.176396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.176427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.176664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.176695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 
00:30:36.571 [2024-07-16 00:56:54.177015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.177034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.177240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.177265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.571 [2024-07-16 00:56:54.177460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.571 [2024-07-16 00:56:54.177479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.571 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.177696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.177726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.177858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.177889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.178032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.178063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.178195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.178225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.178426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.178449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.178648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.178680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.178884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.178915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 
00:30:36.572 [2024-07-16 00:56:54.179041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.179071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.179203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.179221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.179507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.179527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.179705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.179724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.179850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.179881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.180084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.180115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.180273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.180305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.180455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.180473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.180675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.180706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.180989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.181020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 
00:30:36.572 [2024-07-16 00:56:54.181227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.181278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.181541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.181572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.181692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.181723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.181953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.181984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.182266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.182286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.182502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.182522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.182819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.182838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.183051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.183070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.183270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.183290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.183433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.183452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 
00:30:36.572 [2024-07-16 00:56:54.183650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.183680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.183839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.183871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.184140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.184170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.184464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.184484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.184606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.184626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.184770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.184788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.184969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.185000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.185134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.185165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.185374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.185407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.185629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.185660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 
00:30:36.572 [2024-07-16 00:56:54.185881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.185911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.572 [2024-07-16 00:56:54.186115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.572 [2024-07-16 00:56:54.186134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.572 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.186311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.186331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.186438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.186457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.186558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.186577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.186759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.186778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.186955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.186974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.187264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.187300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.187443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.187474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.187614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.187645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 
00:30:36.573 [2024-07-16 00:56:54.187848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.187878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.188147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.188178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.188380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.188412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.188683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.188714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.188929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.188959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.189200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.189230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.189379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.189411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.189619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.189649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.189792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.189812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.189938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.189957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 
00:30:36.573 [2024-07-16 00:56:54.190134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.190153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.190362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.190394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.190596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.190626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.190762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.190793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.190930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.190961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.191229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.191270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.191412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.191443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.191718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.191748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.192057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.192088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.192375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.192407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 
00:30:36.573 [2024-07-16 00:56:54.192677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.192696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.192906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.192925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.193182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.193219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.193443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.193475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.193696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.193727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.194049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.194080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.194352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.194384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.194528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.194558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.194771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.194802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.573 [2024-07-16 00:56:54.195017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.195037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 
00:30:36.573 [2024-07-16 00:56:54.195152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.573 [2024-07-16 00:56:54.195183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.573 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.195447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.195478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.195685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.195715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.196005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.196043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.196246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.196288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.196613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.196644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.196853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.196884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.197090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.197112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.197371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.197403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.197553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.197583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 
00:30:36.574 [2024-07-16 00:56:54.197717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.197760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.197952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.197971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.198081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.198112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.198242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.198284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.198578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.198608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.198747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.198777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.198988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.199018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.199267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.199286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.199494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.199513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.199629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.199665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 
00:30:36.574 [2024-07-16 00:56:54.199937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.199967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.200126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.200157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.200372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.200403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.200692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.200723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.201021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.201053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.201252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.201295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.201559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.201590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.201738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.201769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.201912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.201942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.202144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.202174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 
00:30:36.574 [2024-07-16 00:56:54.202408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.202439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.202641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.202670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.202961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.202979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.203246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.203277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.203539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.203558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.203750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.203769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.203982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.204001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.204106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.204125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.204248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.204274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.204430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.204449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 
00:30:36.574 [2024-07-16 00:56:54.204590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.204609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.574 [2024-07-16 00:56:54.204788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.574 [2024-07-16 00:56:54.204807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.574 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.205074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.205093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.205210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.205228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.205373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.205393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.205522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.205541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.205728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.205747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.205939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.205961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.206066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.206084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.206209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.206228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 
00:30:36.575 [2024-07-16 00:56:54.206352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.206371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.206469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.206488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.206738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.206757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.206867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.206886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.207132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.207151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.207308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.207327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.207463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.207482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.207732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.207751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.208037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.208056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.208168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.208186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 
00:30:36.575 [2024-07-16 00:56:54.208306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.208324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.208452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.208471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.208647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.208667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.208931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.208951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.209195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.209214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.209335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.209355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.209482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.209501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.209687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.209706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.209890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.209909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.210038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.210057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 
00:30:36.575 [2024-07-16 00:56:54.210171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.210190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.210318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.210337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.210453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.210472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.210783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.210802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.210998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.211018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.211238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.211265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.575 [2024-07-16 00:56:54.211454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.575 [2024-07-16 00:56:54.211473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.575 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.211652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.211672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.211916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.211935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.212050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.212069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 
00:30:36.576 [2024-07-16 00:56:54.212185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.212204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.212401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.212420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.212666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.212685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.212844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.212863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.213016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.213035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.213163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.213183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.213358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.213378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.213580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.213603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.213789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.213809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.213997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.214017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 
00:30:36.576 [2024-07-16 00:56:54.214173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.214191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.214383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.214403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.214532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.214564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.214806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.214836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.215034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.215064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.215354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.215374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.215548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.215567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.215794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.215825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.215960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.215991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.216197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.216229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 
00:30:36.576 [2024-07-16 00:56:54.216422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.216443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.216567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.216587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.216860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.216879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.217067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.217088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.217277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.217297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.217557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.217577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.217789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.217810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.218055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.218074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.218220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.218251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.218466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.218497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 
00:30:36.576 [2024-07-16 00:56:54.218627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.218658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.218865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.218895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.219110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.219140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.219387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.219407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.219592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.219611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.219735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.219753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.220037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.576 [2024-07-16 00:56:54.220068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.576 qpair failed and we were unable to recover it. 00:30:36.576 [2024-07-16 00:56:54.220284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.220317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.220494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.220524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.220763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.220794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 
00:30:36.577 [2024-07-16 00:56:54.221018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.221048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.221282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.221303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.221563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.221583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.221873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.221892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.222032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.222062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.222214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.222244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.222490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.222521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.223936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.223972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.224178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.224198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.224386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.224406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 
00:30:36.577 [2024-07-16 00:56:54.224548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.224567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.224757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.224791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.226091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.226124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.226302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.226323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.226440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.226471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.226645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.226677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.226891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.226910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.227114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.227135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.227333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.227352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.227553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.227583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 
00:30:36.577 [2024-07-16 00:56:54.227748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.227779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.227952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.227973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.228164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.228183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.228393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.228414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.228531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.228551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.228669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.228688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.228805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.228824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.230389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.230422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.230661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.230682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.230802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.230820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 
00:30:36.577 [2024-07-16 00:56:54.230993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.231022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.231276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.231309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.231467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.231497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.231766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.231796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.232012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.232033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.232165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.232183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.232374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.232392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.577 [2024-07-16 00:56:54.232513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.577 [2024-07-16 00:56:54.232531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.577 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.232779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.232809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.232950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.232979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 
00:30:36.578 [2024-07-16 00:56:54.234166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.234198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.234487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.234509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.234636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.234654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.234872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.234891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.235061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.235079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.235209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.235238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.235548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.235580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.235794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.235831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.235982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.236012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.236224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.236273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 
00:30:36.578 [2024-07-16 00:56:54.236434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.236452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.236726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.236745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.236930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.236949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.237088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.237117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.237303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.237335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.237569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.237601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.237752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.237781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.237994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.238026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.238173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.238203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.238423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.238455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 
00:30:36.578 [2024-07-16 00:56:54.238592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.238621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.238842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.238873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.239031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.239062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.239409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.239440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.239583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.239612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.239843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.239874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.240097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.240128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.240336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.240367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.240571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.240590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.240798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.240817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 
00:30:36.578 [2024-07-16 00:56:54.240960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.240978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.241113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.241131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.241341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.241360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.241554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.241572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.241756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.241785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.242075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.242105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.242273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.242305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.242515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.242534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.578 [2024-07-16 00:56:54.242672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.578 [2024-07-16 00:56:54.242689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.578 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.242865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.242884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 
00:30:36.579 [2024-07-16 00:56:54.243148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.243178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.243415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.243434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.243616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.243646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.243792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.243821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.243951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.243980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.244189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.244219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.244486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.244518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.244667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.244689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.244941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.244971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.245205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.245236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 
00:30:36.579 [2024-07-16 00:56:54.245502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.245521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.245726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.245757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.245970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.246000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.246214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.246244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.246456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.246487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.246696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.246727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.246942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.246984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.247102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.247120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.247301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.247320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.247481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.247521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 
00:30:36.579 [2024-07-16 00:56:54.247728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.247760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.247976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.248007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.248244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.248270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.248472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.248491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.248703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.248733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.248969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.248999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.249141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.249160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.249343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.249375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.249578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.249608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.249756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.249786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 
00:30:36.579 [2024-07-16 00:56:54.250099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.250130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.250274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.250306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.250420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.250451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.250591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.250610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.250800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.250820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.251023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.251042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.251192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.251211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.251391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.251411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.251581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.251600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.579 qpair failed and we were unable to recover it. 00:30:36.579 [2024-07-16 00:56:54.251780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.579 [2024-07-16 00:56:54.251799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 
00:30:36.580 [2024-07-16 00:56:54.251930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.251948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.252065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.252084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.252331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.252350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.252477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.252496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.252695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.252714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.252888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.252906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.253086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.253105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.253216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.253238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.253437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.253456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.253656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.253674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 
00:30:36.580 [2024-07-16 00:56:54.253929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.253948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.254157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.254175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.254282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.254302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.254490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.254509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.254645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.254663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.254852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.254870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.255045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.255064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.255238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.255267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.255376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.255396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.255572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.255591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 
00:30:36.580 [2024-07-16 00:56:54.255712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.255731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.255846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.255865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.256137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.256155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.256337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.256357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.256538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.256557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.256746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.256764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.256983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.257002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.257110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.257129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.257312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.257331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.257535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.257554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 
00:30:36.580 [2024-07-16 00:56:54.257663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.257681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.257934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.257952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.258129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.258148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.258283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.258303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.580 [2024-07-16 00:56:54.258494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.580 [2024-07-16 00:56:54.258513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.580 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.258628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.258647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.258949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.258968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.259104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.259123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.259305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.259324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.259551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.259570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 
00:30:36.581 [2024-07-16 00:56:54.259686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.259704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.259810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.259829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.260072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.260091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.260362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.260382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.260503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.260522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.260698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.260717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.260820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.260839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.260962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.260983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.261091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.261111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.261222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.261241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 
00:30:36.581 [2024-07-16 00:56:54.261493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.261513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.261703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.261721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.261832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.261850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.261969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.261988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.262263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.262282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.262459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.262478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.262665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.262684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.262865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.262884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.263062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.263081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.263184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.263203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 
00:30:36.581 [2024-07-16 00:56:54.263384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.263403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.263529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.263549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.263766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.263785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.263983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.264002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.264109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.264128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.264237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.264265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.264384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.264404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.264594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.264613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.264805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.264824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.264933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.264952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 
00:30:36.581 [2024-07-16 00:56:54.265132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.265154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.265411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.265431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.265678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.265698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.265885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.581 [2024-07-16 00:56:54.265904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.581 qpair failed and we were unable to recover it. 00:30:36.581 [2024-07-16 00:56:54.266113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.266133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.266263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.266283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.266482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.266501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.266623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.266643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.266823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.266842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.267109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.267128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 
00:30:36.582 [2024-07-16 00:56:54.267259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.267280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.267390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.267410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.267530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.267549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.267737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.267756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.267947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.267966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.268151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.268170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.268290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.268310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.268485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.268507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.268725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.268744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.268858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.268877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 
00:30:36.582 [2024-07-16 00:56:54.268998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.269017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.269271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.269290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.269413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.269432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.269556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.269575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.269771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.269790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.269913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.269932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.270058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.270078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.270208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.270227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.270344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.270363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.270564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.270583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 
00:30:36.582 [2024-07-16 00:56:54.270787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.270807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.270986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.271005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.271205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.271225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.271505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.271525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.271780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.271800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.271994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.272013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.272230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.272249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.272385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.272404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.272527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.272547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.272724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.272743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 
00:30:36.582 [2024-07-16 00:56:54.272939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.582 [2024-07-16 00:56:54.272959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.582 qpair failed and we were unable to recover it. 00:30:36.582 [2024-07-16 00:56:54.273097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.583 [2024-07-16 00:56:54.273117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.583 qpair failed and we were unable to recover it. 00:30:36.583 [2024-07-16 00:56:54.273265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.583 [2024-07-16 00:56:54.273285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.583 qpair failed and we were unable to recover it. 00:30:36.583 [2024-07-16 00:56:54.273470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.583 [2024-07-16 00:56:54.273489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.583 qpair failed and we were unable to recover it. 00:30:36.583 [2024-07-16 00:56:54.273604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.583 [2024-07-16 00:56:54.273624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.583 qpair failed and we were unable to recover it. 00:30:36.583 [2024-07-16 00:56:54.273711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.583 [2024-07-16 00:56:54.273729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.583 qpair failed and we were unable to recover it. 00:30:36.583 [2024-07-16 00:56:54.273914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.583 [2024-07-16 00:56:54.273933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.583 qpair failed and we were unable to recover it. 00:30:36.583 [2024-07-16 00:56:54.274106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.583 [2024-07-16 00:56:54.274125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.583 qpair failed and we were unable to recover it. 00:30:36.583 [2024-07-16 00:56:54.274314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.583 [2024-07-16 00:56:54.274334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.583 qpair failed and we were unable to recover it. 00:30:36.583 [2024-07-16 00:56:54.274515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.583 [2024-07-16 00:56:54.274534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.583 qpair failed and we were unable to recover it. 
00:30:36.584 [2024-07-16 00:56:54.283209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.584 [2024-07-16 00:56:54.283227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:36.584 qpair failed and we were unable to recover it.
00:30:36.584 [2024-07-16 00:56:54.283353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.584 [2024-07-16 00:56:54.283373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:36.584 qpair failed and we were unable to recover it.
00:30:36.584 [2024-07-16 00:56:54.283619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.584 [2024-07-16 00:56:54.283638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:36.584 qpair failed and we were unable to recover it.
00:30:36.584 [2024-07-16 00:56:54.283885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.584 [2024-07-16 00:56:54.283904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:36.584 qpair failed and we were unable to recover it.
00:30:36.584 [2024-07-16 00:56:54.284087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.584 [2024-07-16 00:56:54.284107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:36.584 qpair failed and we were unable to recover it.
00:30:36.584 [2024-07-16 00:56:54.284230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.584 [2024-07-16 00:56:54.284250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:36.584 qpair failed and we were unable to recover it.
00:30:36.584 [2024-07-16 00:56:54.284369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.584 [2024-07-16 00:56:54.284388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:36.584 qpair failed and we were unable to recover it.
00:30:36.584 [2024-07-16 00:56:54.284638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.584 [2024-07-16 00:56:54.284708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420
00:30:36.584 qpair failed and we were unable to recover it.
00:30:36.584 [2024-07-16 00:56:54.284876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.584 [2024-07-16 00:56:54.284910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420
00:30:36.584 qpair failed and we were unable to recover it.
00:30:36.584 [2024-07-16 00:56:54.285098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.584 [2024-07-16 00:56:54.285118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:36.584 qpair failed and we were unable to recover it.
00:30:36.588 [2024-07-16 00:56:54.314044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.314063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.314267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.314287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.314538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.314571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.314711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.314738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.315007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.315042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.315184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.315209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.315425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.315459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.315724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.315748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.316025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.316045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.316220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.316239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 
00:30:36.588 [2024-07-16 00:56:54.316453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.316473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.316655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.316674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.316921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.316940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.317197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.317217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.317341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.317360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.317536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.317555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.317687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.317706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.317883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.317902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.318057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.318076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.318200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.318219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 
00:30:36.588 [2024-07-16 00:56:54.318403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.318423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.318554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.318573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.318699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.318718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.318948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.318967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.319246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.319273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.319468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.319487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.319666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.319684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.319869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.319888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.588 [2024-07-16 00:56:54.320007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.588 [2024-07-16 00:56:54.320026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.588 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.320138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.320156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 
00:30:36.589 [2024-07-16 00:56:54.320372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.320404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.320610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.320629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.320817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.320836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.321022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.321041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.321234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.321253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.321447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.321466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.321597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.321616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.321805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.321824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.322097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.322116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.322234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.322260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 
00:30:36.589 [2024-07-16 00:56:54.322435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.322453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.322566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.322585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.322762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.322781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.322924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.322942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.323045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.323066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.323268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.323288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.323598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.323617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.323744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.323762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.323885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.323905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.324179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.324198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 
00:30:36.589 [2024-07-16 00:56:54.324387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.324406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.324590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.324609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.324737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.324756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.325019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.325037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.325154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.325173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.325421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.325440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.325578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.325597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.325704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.325723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.325905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.325924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.326148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.326167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 
00:30:36.589 [2024-07-16 00:56:54.326346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.326365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.326575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.326594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.326731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.326749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.326856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.326876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.327053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.327072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.327196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.589 [2024-07-16 00:56:54.327215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.589 qpair failed and we were unable to recover it. 00:30:36.589 [2024-07-16 00:56:54.327408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.327428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.327658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.327677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.327865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.327884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.328016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.328035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 
00:30:36.590 [2024-07-16 00:56:54.328244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.328270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.328391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.328410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.328585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.328604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.328875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.328894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.329139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.329158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.329371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.329390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.329583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.329602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.329794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.329813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.330074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.330093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.330303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.330322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 
00:30:36.590 [2024-07-16 00:56:54.330458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.330477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.330588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.330607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.330817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.330836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.330957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.330976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.331108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.331130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.331266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.331286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.331564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.331584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.331828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.331846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.332028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.332046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.332343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.332363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 
00:30:36.590 [2024-07-16 00:56:54.332488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.332507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.332690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.332709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.332930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.332949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.333122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.333141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.333335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.333354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.333543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.333562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.333763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.333782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.334007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.334025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.334210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.334230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.334509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.334529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 
00:30:36.590 [2024-07-16 00:56:54.334724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.334742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.334935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.334954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.335148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.335166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.335359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.335378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.335485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.335504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.335748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.335767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.590 [2024-07-16 00:56:54.335955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.590 [2024-07-16 00:56:54.335973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.590 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.336085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.336103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.336228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.336246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.336504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.336523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 
00:30:36.591 [2024-07-16 00:56:54.336640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.336659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.336850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.336869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.337078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.337097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.337198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.337217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.337318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.337338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.337571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.337641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.337809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.337843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.338035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.338056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.338182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.338201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.338324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.338343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 
00:30:36.591 [2024-07-16 00:56:54.338482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.338501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.338696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.338714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.338894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.338913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.339093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.339112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.339331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.339353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.339635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.339654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.339781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.339801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.339977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.339996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.340246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.340271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.340522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.340541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 
00:30:36.591 [2024-07-16 00:56:54.340664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.340683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.340800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.340819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.341015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.341034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.341129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.341148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.341280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.341300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.341571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.341590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.341863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.341882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.342127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.342145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.342343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.342362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 00:30:36.591 [2024-07-16 00:56:54.342480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.342499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it. 
00:30:36.591 [2024-07-16 00:56:54.342627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.591 [2024-07-16 00:56:54.342645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.591 qpair failed and we were unable to recover it.
00:30:36.591-00:30:36.869 [2024-07-16 00:56:54.342819 - 00:56:54.388801] posix.c:1023:posix_sock_create / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: the same connect() failed (errno = 111) and sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 messages repeat for every connection attempt in this interval; each attempt ends with "qpair failed and we were unable to recover it."
00:30:36.869 [2024-07-16 00:56:54.389014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.389033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 00:30:36.869 [2024-07-16 00:56:54.389205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.389228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 00:30:36.869 [2024-07-16 00:56:54.389405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.389425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 00:30:36.869 [2024-07-16 00:56:54.389582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.389601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 00:30:36.869 [2024-07-16 00:56:54.389846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.389865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 00:30:36.869 [2024-07-16 00:56:54.390058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.390077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 00:30:36.869 [2024-07-16 00:56:54.390215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.390234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 00:30:36.869 [2024-07-16 00:56:54.390464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.390483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 00:30:36.869 [2024-07-16 00:56:54.390685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.390704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 00:30:36.869 [2024-07-16 00:56:54.390826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.390845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 
00:30:36.869 [2024-07-16 00:56:54.391098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.391117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 00:30:36.869 [2024-07-16 00:56:54.391298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.391318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 00:30:36.869 [2024-07-16 00:56:54.391440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.391458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 00:30:36.869 [2024-07-16 00:56:54.391712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.869 [2024-07-16 00:56:54.391730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.869 qpair failed and we were unable to recover it. 00:30:36.869 [2024-07-16 00:56:54.391949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.391968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.392157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.392176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.392299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.392319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.392432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.392451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.392702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.392722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.392856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.392875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 
00:30:36.870 [2024-07-16 00:56:54.393081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.393099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.393350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.393369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.393625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.393644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.393855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.393874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.394070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.394090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.394387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.394406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.394679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.394698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.394808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.394827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.395079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.395098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.395344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.395363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 
00:30:36.870 [2024-07-16 00:56:54.395555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.395574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.395685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.395703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.395893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.395912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.396055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.396074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.396354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.396374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.396592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.396610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.396823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.396843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.396955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.396975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.397163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.397182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.397429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.397448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 
00:30:36.870 [2024-07-16 00:56:54.397642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.397662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.397937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.397960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.398138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.398157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.398375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.398394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.398607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.398625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.398870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.398889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.398974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.398992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.399269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.399289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.399497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.399517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.399704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.399723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 
00:30:36.870 [2024-07-16 00:56:54.399827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.399846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.400063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.400082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.400208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.400227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.400507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.400526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.400674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.400693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.870 [2024-07-16 00:56:54.400978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.870 [2024-07-16 00:56:54.400997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.870 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.401185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.401205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.401336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.401356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.401464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.401483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.401686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.401705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 
00:30:36.871 [2024-07-16 00:56:54.401825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.401844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.402020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.402039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.402286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.402306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.402429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.402448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.402638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.402657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.402834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.402853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.403042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.403061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.403251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.403278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.403555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.403574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.403700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.403718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 
00:30:36.871 [2024-07-16 00:56:54.403894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.403913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.404169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.404188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.404387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.404407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.404580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.404600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.404742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.404761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.405033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.405052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.405149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.405168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.405417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.405436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.405621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.405640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.405821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.405839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 
00:30:36.871 [2024-07-16 00:56:54.405964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.405984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.406175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.406197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.406424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.406443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.406693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.406711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.406898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.406917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.407094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.407114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.407293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.407313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.407486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.407505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.407696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.407715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.407916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.407935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 
00:30:36.871 [2024-07-16 00:56:54.408161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.408180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.408321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.408341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.408609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.408627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.408751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.408770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.409013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.409031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.409237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.409263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.409449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.871 [2024-07-16 00:56:54.409469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.871 qpair failed and we were unable to recover it. 00:30:36.871 [2024-07-16 00:56:54.409653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.409672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.409885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.409904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.410007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.410026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 
00:30:36.872 [2024-07-16 00:56:54.410199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.410218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.410448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.410467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.410659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.410677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.410810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.410829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.411079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.411097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.411221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.411240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.411433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.411453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.411674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.411692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.411824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.411843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.412116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.412134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 
00:30:36.872 [2024-07-16 00:56:54.412216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.412234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.412490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.412509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.412646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.412666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.412860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.412879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.413135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.413154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.413283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.413303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.413420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.413439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.413627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.413646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.413923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.413942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.414188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.414207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 
00:30:36.872 [2024-07-16 00:56:54.414457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.414476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.414635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.414658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.414833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.414852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.415108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.415126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.415321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.415341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.415464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.415483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.415667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.415687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.415863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.415882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.415985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.416004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 00:30:36.872 [2024-07-16 00:56:54.416198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.872 [2024-07-16 00:56:54.416217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.872 qpair failed and we were unable to recover it. 
00:30:36.872 [2024-07-16 00:56:54.416493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.872 [2024-07-16 00:56:54.416513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:36.872 qpair failed and we were unable to recover it.
[... the identical error sequence repeats continuously from 00:56:54.416 through 00:56:54.460 (log prefixes 00:30:36.872-00:30:36.879): posix.c:1023:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it." ...]
00:30:36.879 [2024-07-16 00:56:54.460246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:36.879 [2024-07-16 00:56:54.460273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:36.879 qpair failed and we were unable to recover it.
00:30:36.879 [2024-07-16 00:56:54.460394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.460414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.460616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.460635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.460820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.460839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.461053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.461072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.461324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.461344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.461480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.461499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.461725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.461745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.461882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.461901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.462151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.462170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.462294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.462313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 
00:30:36.879 [2024-07-16 00:56:54.462508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.462527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.462746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.462766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.462987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.463006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.463236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.463262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.463387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.463406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.463656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.463675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.463919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.463938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.464061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.464080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.464217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.464236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.464447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.464515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 
00:30:36.879 [2024-07-16 00:56:54.464813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.464848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.465013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.465054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.465354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.465387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.465542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.465572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.465723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.465753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.465910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.465933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.466147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.466166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.466282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.466302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.466579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.466598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.466777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.466796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 
00:30:36.879 [2024-07-16 00:56:54.466930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.466948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.467083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.467102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.467228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.467247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.467392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.467411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.467659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.467679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.467883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.467902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.468012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.468030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.468193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.468212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.879 [2024-07-16 00:56:54.468385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.879 [2024-07-16 00:56:54.468405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.879 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.468651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.468669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 
00:30:36.880 [2024-07-16 00:56:54.468848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.468867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.469053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.469072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.469264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.469284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.469397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.469415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.469601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.469620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.469882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.469901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.470086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.470104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.470325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.470344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.470518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.470538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.470783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.470802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 
00:30:36.880 [2024-07-16 00:56:54.471003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.471022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.471201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.471220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.471423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.471443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.471612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.471631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.471809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.471828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.471914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.471932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.472183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.472202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.472333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.472353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.472460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.472479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.472637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.472656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 
00:30:36.880 [2024-07-16 00:56:54.472758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.472778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.473040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.473061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.473180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.473199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.473378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.473397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.473520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.473539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.473806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.473824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.473958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.473977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.474223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.474242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.474432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.474452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.474573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.474592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 
00:30:36.880 [2024-07-16 00:56:54.474696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.474716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.474960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.474979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.475233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.880 [2024-07-16 00:56:54.475251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.880 qpair failed and we were unable to recover it. 00:30:36.880 [2024-07-16 00:56:54.475445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.475464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.475778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.475797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.476014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.476033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.476152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.476172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.476367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.476387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.476632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.476652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.476900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.476919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 
00:30:36.881 [2024-07-16 00:56:54.477162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.477182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.477378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.477398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.477602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.477621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.477808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.477827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.478124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.478143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.478352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.478372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.478594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.478613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.478807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.478826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.479025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.479044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.479237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.479264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 
00:30:36.881 [2024-07-16 00:56:54.479459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.479477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.479594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.479613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.479737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.479756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.479878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.479898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.480111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.480130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.480311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.480331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.480506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.480525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.480818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.480836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.481081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.481101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.481291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.481310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 
00:30:36.881 [2024-07-16 00:56:54.481499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.481518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.481703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.481726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.481859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.481878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.482004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.482023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.482219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.482238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.482423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.482442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.482569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.482588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.482703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.482722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.482935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.482954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.483144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.483163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 
00:30:36.881 [2024-07-16 00:56:54.483282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.483301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.881 qpair failed and we were unable to recover it. 00:30:36.881 [2024-07-16 00:56:54.483431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.881 [2024-07-16 00:56:54.483450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.483576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.483594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.483746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.483765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.483907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.483926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.484056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.484075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.484329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.484349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.484469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.484488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.484670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.484689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.484833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.484852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 
00:30:36.882 [2024-07-16 00:56:54.485030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.485049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.485155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.485174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.485362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.485382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.485493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.485511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.485612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.485629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.485831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.485850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.486044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.486063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.486168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.486186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.486385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.486404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.486533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.486552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 
00:30:36.882 [2024-07-16 00:56:54.486691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.486710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.486819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.486839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.486992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.487011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.487172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.487192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.487395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.487415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.487518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.487537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.487722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.487740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.487932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.487951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.488167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.488186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.488374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.488393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 
00:30:36.882 [2024-07-16 00:56:54.488513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.488533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.488746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.488764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.488971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.488991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.489117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.489137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.489252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.489279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.489404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.489423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.489600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.489619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.489822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.882 [2024-07-16 00:56:54.489841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.882 qpair failed and we were unable to recover it. 00:30:36.882 [2024-07-16 00:56:54.489959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.489978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.490243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.490268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 
00:30:36.883 [2024-07-16 00:56:54.490399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.490418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.490539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.490558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.490714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.490733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.490909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.490928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.491154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.491173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.491348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.491367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.491545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.491565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.491859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.491878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.492022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.492041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.492149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.492167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 
00:30:36.883 [2024-07-16 00:56:54.492368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.492387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.492565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.492584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.492778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.492797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.492922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.492941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.493064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.493083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.493279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.493298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.493515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.493535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.493716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.493736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.493985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.494007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.494186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.494205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 
00:30:36.883 [2024-07-16 00:56:54.494454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.494473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.494772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.494791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.494975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.494996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.495249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.883 [2024-07-16 00:56:54.495275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.883 qpair failed and we were unable to recover it. 00:30:36.883 [2024-07-16 00:56:54.495624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.495643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.495843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.495862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.496080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.496099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.496276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.496296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.496543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.496562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.496812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.496831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 
00:30:36.884 [2024-07-16 00:56:54.497018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.497037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.497250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.497284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.497552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.497571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.497692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.497711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.497897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.497916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.498192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.498211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.498370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.498390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.498588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.498607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.498725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.498745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.498921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.498939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 
00:30:36.884 [2024-07-16 00:56:54.499120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.499140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.499279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.499298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.499536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.499554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.499769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.499789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.500053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.500072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.500265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.500285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.500411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.500430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.500703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.500721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.500838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.500857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.501043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.501062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 
00:30:36.884 [2024-07-16 00:56:54.501198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.501217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.501398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.501418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.501562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.501581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.501684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.501704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.501828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.501847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.502029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.502047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.502161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.502180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.502308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.502328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.502543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.502565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 00:30:36.884 [2024-07-16 00:56:54.502687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.884 [2024-07-16 00:56:54.502706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.884 qpair failed and we were unable to recover it. 
00:30:36.885 [2024-07-16 00:56:54.502827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.502846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.503098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.503117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.503333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.503352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.503542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.503560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.503741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.503760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.503949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.503968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.504141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.504160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.504269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.504289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.504478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.504497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.504586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.504603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 
00:30:36.885 [2024-07-16 00:56:54.504845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.504864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.504994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.505013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.505158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.505177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.505429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.505448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.505604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.505622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.505793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.505812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.506008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.506027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.506142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.506161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.506274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.506294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.506528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.506547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 
00:30:36.885 [2024-07-16 00:56:54.506809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.506828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.506962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.506980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.507101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.507121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.507245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.507270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.507451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.507471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.507703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.507722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.507911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.507930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.508172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.508191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.508346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.508365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.508617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.508636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 
00:30:36.885 [2024-07-16 00:56:54.508765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.508783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.508980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.508999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.509094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.509113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.509225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.509243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.509376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.509396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.509645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.509664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.885 qpair failed and we were unable to recover it. 00:30:36.885 [2024-07-16 00:56:54.509777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.885 [2024-07-16 00:56:54.509797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.509980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.509999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.510126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.510148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.510281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.510300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 
00:30:36.886 [2024-07-16 00:56:54.510494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.510513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.510700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.510719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.510893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.510912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.511113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.511132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.511407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.511427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.511630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.511649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.511827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.511846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.512022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.512041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.512319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.512338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.512522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.512541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 
00:30:36.886 [2024-07-16 00:56:54.512728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.512747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.512842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.512861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.512992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.513012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.513194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.513213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.513399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.513419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.513597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.513616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.513828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.513848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.514045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.514064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.514252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.514280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.514473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.514492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 
00:30:36.886 [2024-07-16 00:56:54.514674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.514693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.515036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.515055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.515270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.515290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.515511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.515529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.515668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.515687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.515962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.515981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.516252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.516279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.516390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.516409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.516523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.516542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.516766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.516785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 
00:30:36.886 [2024-07-16 00:56:54.516894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.516914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.517036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.517056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.517253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.517292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.517469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.517488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.517689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.886 [2024-07-16 00:56:54.517708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.886 qpair failed and we were unable to recover it. 00:30:36.886 [2024-07-16 00:56:54.517903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.517923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.518111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.518130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.518317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.518336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.518489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.518513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.518759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.518778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 
00:30:36.887 [2024-07-16 00:56:54.518895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.518915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.519121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.519140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.519343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.519363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.519493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.519512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.519771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.519790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.519914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.519934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.520189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.520209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.520394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.520413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.520514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.520533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.520739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.520758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 
00:30:36.887 [2024-07-16 00:56:54.520955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.520974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.521103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.521122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.521371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.521390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.521665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.521684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.521803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.521822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.522009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.522028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.522327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.522346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.522644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.522663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.522792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.522811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.522995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.523015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 
00:30:36.887 [2024-07-16 00:56:54.523235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.523263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.523457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.523476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.523666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.523685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.523793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.523812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.523931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.523950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.526500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.526522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.526703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.526723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.526844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.526863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.526980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.526999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.527271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.527291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 
00:30:36.887 [2024-07-16 00:56:54.527539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.527558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.527749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.527768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.527973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.527993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.528113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.528132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.528308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.528327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.528512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.528531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.528816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.528835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.528975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.887 [2024-07-16 00:56:54.528994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.887 qpair failed and we were unable to recover it. 00:30:36.887 [2024-07-16 00:56:54.529135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.529157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.529344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.529363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 
00:30:36.888 [2024-07-16 00:56:54.529548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.529567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.529838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.529857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.529975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.529994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.530171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.530190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.530306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.530325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.530440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.530459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.530595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.530614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.530733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.530752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.530872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.530890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.531076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.531096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 
00:30:36.888 [2024-07-16 00:56:54.531212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.531231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.531432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.531452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.531731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.531750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.532034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.532053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.532269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.532289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.532493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.532512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.532706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.532725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.532915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.532934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.533051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.533070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.533291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.533310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 
00:30:36.888 [2024-07-16 00:56:54.533430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.533449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.533648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.533667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.533862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.533881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.534056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.534075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.534176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.534197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.534478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.534498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.534654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.534672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.534858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.534877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.535065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.535084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.535268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.535288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 
00:30:36.888 [2024-07-16 00:56:54.535462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.535481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.535753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.535772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.535903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.535921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.536031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.536050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.536249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.888 [2024-07-16 00:56:54.536276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.888 qpair failed and we were unable to recover it. 00:30:36.888 [2024-07-16 00:56:54.536398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.536416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.536717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.536736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.536910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.536930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.537185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.537208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.537395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.537414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 
00:30:36.889 [2024-07-16 00:56:54.537602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.537621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.537823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.537842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.538030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.538049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.538242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.538268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.538456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.538475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.538665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.538684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.538812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.538831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.539079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.539097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.539212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.539231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.539426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.539446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 
00:30:36.889 [2024-07-16 00:56:54.539632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.539652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.539810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.539829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.540043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.540062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.540277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.540297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.540481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.540501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.540709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.540728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.541023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.541042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.541339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.541359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.541554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.541574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.541837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.541855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 
00:30:36.889 [2024-07-16 00:56:54.542135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.542154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.542361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.542381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.542605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.542625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.542870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.542889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.543017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.543036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.543159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.543178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.543311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.543331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.543467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.543486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.543589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.543607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.543828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.543847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 
00:30:36.889 [2024-07-16 00:56:54.544037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.544055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.544174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.544194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.544315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.544335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.544509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.544528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.544659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.544678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.544771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.544788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.544984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.545003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.545268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.545288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.889 qpair failed and we were unable to recover it. 00:30:36.889 [2024-07-16 00:56:54.545422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.889 [2024-07-16 00:56:54.545445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.545643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.545663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 
00:30:36.890 [2024-07-16 00:56:54.545913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.545931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.546106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.546125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.546315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.546334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.546535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.546554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.546822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.546842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.546956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.546975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.547219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.547237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.547448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.547467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.547572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.547591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.547832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.547851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 
00:30:36.890 [2024-07-16 00:56:54.548023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.548047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.548289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.548329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.548545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.548571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.548709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.548733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.548941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.548965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.549106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.549135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.549352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.549386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.549566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.549589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.549812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.549836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.549985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.550009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 
00:30:36.890 [2024-07-16 00:56:54.550195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.550217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.550426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.550450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.550558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.550581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.550750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.550773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.550926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.550948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.551115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.551143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.551406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.551432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.551542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.551566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.551762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.551793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.552055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.552081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 
00:30:36.890 [2024-07-16 00:56:54.552266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.552313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.552526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.552556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.552688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.552718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.552904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.552936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.553093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.553131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.553603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.553800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.554175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.554250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.554471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.554504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.554794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.554860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.555041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.555070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 
00:30:36.890 [2024-07-16 00:56:54.555276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.555306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.555479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.555512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.890 [2024-07-16 00:56:54.555675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.890 [2024-07-16 00:56:54.555704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.890 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.555873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.555904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.556072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.556102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.556267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.556297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.556464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.556493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.556660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.556690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.556854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.556883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.557033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.557064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 
00:30:36.891 [2024-07-16 00:56:54.557201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.557231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.557407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.557439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.557607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.557638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.557783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.557813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.557969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.558000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.558144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.558174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.558335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.558362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.558705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.558725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.558933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.558952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.559131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.559150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 
00:30:36.891 [2024-07-16 00:56:54.559329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.559349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.559467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.559486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.559704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.559724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.559850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.559869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.560068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.560087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.561673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.561725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.562019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.562052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.562350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.562383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.562680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.562712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.562924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.562956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 
00:30:36.891 [2024-07-16 00:56:54.563106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.563137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.563338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.563359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.563486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.563506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.563652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.563688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.563833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.563863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.563994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.564024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.564183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.564214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.564478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.564511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.564652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.564688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.564930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.564961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 
00:30:36.891 [2024-07-16 00:56:54.565095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.565114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.565314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.565345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.565644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.565675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.565915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.565946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.566140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.566159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.566372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.891 [2024-07-16 00:56:54.566392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.891 qpair failed and we were unable to recover it. 00:30:36.891 [2024-07-16 00:56:54.567508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.567542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.567865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.567903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.568177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.568208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.568447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.568483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 
00:30:36.892 [2024-07-16 00:56:54.568683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.568714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.568924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.568955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.569163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.569182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.569356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.569377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.569563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.569582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.569752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.569770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.569873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.569893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.570137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.570168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.570316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.570347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.570483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.570513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 
00:30:36.892 [2024-07-16 00:56:54.570728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.570758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.570999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.571030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.571282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.571302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.571487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.571506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.571633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.571652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.571848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.571879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.572012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.572043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.572276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.572307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.572545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.572586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.572760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.572779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 
00:30:36.892 [2024-07-16 00:56:54.573067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.573097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.573321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.573353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.573561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.573592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.573860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.573890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.574102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.892 [2024-07-16 00:56:54.574133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.892 qpair failed and we were unable to recover it. 00:30:36.892 [2024-07-16 00:56:54.574378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.574410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.574626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.574657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.574840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.574871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.575078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.575114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.575264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.575296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 
00:30:36.893 [2024-07-16 00:56:54.575546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.575576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.575828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.575847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.576037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.576056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.576329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.576361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.576570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.576601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.576822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.576853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.577128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.577159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.577514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.577546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.577719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.577750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.577965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.577996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 
00:30:36.893 [2024-07-16 00:56:54.578234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.578261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.578371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.578390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.578670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.578702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.579001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.579032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.579263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.579296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.579444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.579475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.579774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.579805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.580019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.580038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.580301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.580333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.580543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.580573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 
00:30:36.893 [2024-07-16 00:56:54.580856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.580875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.581151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.581182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.581385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.581418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.581644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.581675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.581808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.893 [2024-07-16 00:56:54.581839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.893 qpair failed and we were unable to recover it. 00:30:36.893 [2024-07-16 00:56:54.582064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.582083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.582196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.582226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.582382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.582413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.582542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.582573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.582769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.582800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 
00:30:36.894 [2024-07-16 00:56:54.583004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.583035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.583175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.583194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.583386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.583406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.584566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.584597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.584726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.584746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.584991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.585031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.585277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.585309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.585611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.585642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.585809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.585846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.586092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.586123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 
00:30:36.894 [2024-07-16 00:56:54.586424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.586457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.586709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.586739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.586955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.586986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.587288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.587320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.587551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.587582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.587766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.587797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.588025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.588044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.588229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.588270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.588518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.588550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.588823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.588855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 
00:30:36.894 [2024-07-16 00:56:54.589109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.589128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.589387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.589408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.589543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.589562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.589681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.589725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.589878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.589909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.894 qpair failed and we were unable to recover it. 00:30:36.894 [2024-07-16 00:56:54.590135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.894 [2024-07-16 00:56:54.590166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.590463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.590495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.590706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.590726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.590852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.590884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.591159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.591190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 
00:30:36.895 [2024-07-16 00:56:54.591360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.591392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.591600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.591630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.591873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.591892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.592179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.592210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.592380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.592412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.592750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.592782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.592989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.593008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.593219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.593239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.593524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.593544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.593670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.593690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 
00:30:36.895 [2024-07-16 00:56:54.593816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.593835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.594014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.594055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.594295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.594327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.594551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.594582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.594803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.594834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.595060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.595091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.595245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.595286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.595435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.595466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.595718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.595755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.595957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.595988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 
00:30:36.895 [2024-07-16 00:56:54.596145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.596176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.596388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.596420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.596532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.596563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.596762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.596793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.597030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.597050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.597356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.597388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.597687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.597718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.597857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.597889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.895 [2024-07-16 00:56:54.598159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.895 [2024-07-16 00:56:54.598189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.895 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.598433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.598466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 
00:30:36.896 [2024-07-16 00:56:54.598617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.598647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.598802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.598834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.598994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.599025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.599246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.599286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.599495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.599526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.599740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.599771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.599976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.599996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.600206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.600237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.600396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.600428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.600697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.600729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 
00:30:36.896 [2024-07-16 00:56:54.601001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.601033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.601201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.601233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.601378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.601410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.601562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.601592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.601808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.601839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.602053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.602085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.602213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.602242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.602397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.602429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.602727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.602758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.602906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.602937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 
00:30:36.896 [2024-07-16 00:56:54.603093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.603124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.603436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.603468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.603607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.603638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.603782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.603814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.603952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.603983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.604123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.604142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.604328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.604360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.604583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.896 [2024-07-16 00:56:54.604615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.896 qpair failed and we were unable to recover it. 00:30:36.896 [2024-07-16 00:56:54.604830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.604861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.605061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.605081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 
00:30:36.897 [2024-07-16 00:56:54.605343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.605376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.605535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.605568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.605854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.605885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.606047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.606077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.606278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.606311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.606469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.606500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.606610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.606642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.606907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.606926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.607005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.607023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.607113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.607131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 
00:30:36.897 [2024-07-16 00:56:54.607376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.607396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.607652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.607689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.607945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.607976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.609777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.609813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.610065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.610086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.610269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.610289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.897 [2024-07-16 00:56:54.610414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.897 [2024-07-16 00:56:54.610444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.897 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.610661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.610692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.610904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.610935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.611141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.611160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 
00:30:36.898 [2024-07-16 00:56:54.611337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.611357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.611462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.611481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.611680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.611698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.611931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.611961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.612107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.612137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.612339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.612377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.612612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.612643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.612964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.613003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.613274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.613294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.613569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.613588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 
00:30:36.898 [2024-07-16 00:56:54.613718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.613737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.613929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.613948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.614142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.614161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.614351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.614371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.614483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.614501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.614619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.614638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.614842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.614873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.615015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.615045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.615250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.615292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.615485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.615515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 
00:30:36.898 [2024-07-16 00:56:54.615658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.615689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.615930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.615959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.616093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.898 [2024-07-16 00:56:54.616111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.898 qpair failed and we were unable to recover it. 00:30:36.898 [2024-07-16 00:56:54.616300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.616319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.616502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.616521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.616709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.616728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.616860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.616879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.617042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.617073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.617315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.617347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.617579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.617610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 
00:30:36.899 [2024-07-16 00:56:54.617821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.617852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.618110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.618141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.618323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.618354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.619561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.619593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.619895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.619915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.620094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.620113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.620363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.620395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.620633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.620663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.620844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.620875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.621089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.621130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 
00:30:36.899 [2024-07-16 00:56:54.621279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.621300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.621522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.621542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.621789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.621820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.621962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.621993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.622212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.622243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.622537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.622574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.622867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.622897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.623058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.623087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.623244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.623286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.623498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.623528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 
00:30:36.899 [2024-07-16 00:56:54.623750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.623781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.624020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.624050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.624182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.624213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.624382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.624413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.624619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.624649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.624802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.899 [2024-07-16 00:56:54.624821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.899 qpair failed and we were unable to recover it. 00:30:36.899 [2024-07-16 00:56:54.625008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.625027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.625145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.625164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.625349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.625369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.625578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.625598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 
00:30:36.900 [2024-07-16 00:56:54.625774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.625793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.625918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.625937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.627076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.627107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.627400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.627421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.627695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.627715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.627858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.627877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.628067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.628098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.628269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.628301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.628540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.628570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.628761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.628792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 
00:30:36.900 [2024-07-16 00:56:54.628962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.628981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.629163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.629194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.629465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.629500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.630641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.630672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.630801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.630821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.631033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.631054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.631243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.631271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.631399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.631419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.631642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.631673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.631886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.631917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 
00:30:36.900 [2024-07-16 00:56:54.632139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.632158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.632365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.632385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.632571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.632591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.632717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.900 [2024-07-16 00:56:54.632736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.900 qpair failed and we were unable to recover it. 00:30:36.900 [2024-07-16 00:56:54.632957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.632988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.633282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.633320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.633525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.633556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.633760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.633791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.634046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.634077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.634289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.634321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 
00:30:36.901 [2024-07-16 00:56:54.634529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.634560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.634703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.634733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.634860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.634891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.635110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.635142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.635296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.635328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.635547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.635578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.635851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.635883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.636082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.636113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.636280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.636312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.636464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.636495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 
00:30:36.901 [2024-07-16 00:56:54.636740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.636771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.636980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.637010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.637210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.637242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.637584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.637617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.637939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.637970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.638168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.638199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.638352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.638384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.638679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.638710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.638853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.638884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.639027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.639046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 
00:30:36.901 [2024-07-16 00:56:54.639326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.639346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.639564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.639595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.639822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.639854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.640012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.640032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.640305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.640337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.640580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.640612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.640744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.640776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.901 qpair failed and we were unable to recover it. 00:30:36.901 [2024-07-16 00:56:54.640986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.901 [2024-07-16 00:56:54.641006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.641205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.641236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.641588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.641620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 
00:30:36.902 [2024-07-16 00:56:54.641764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.641795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.642067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.642099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.642309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.642329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.642464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.642495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.642715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.642747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.642895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.642918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.643163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.643183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.643362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.643394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.643641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.643671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.643870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.643889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 
00:30:36.902 [2024-07-16 00:56:54.644075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.644113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.644388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.644420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.644659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.644690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.644905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.644925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.645179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.645210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.645558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.645592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.645747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.645778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.646102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.646133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.646358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.646379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.646580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.646600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 
00:30:36.902 [2024-07-16 00:56:54.646731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.646762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.646979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.647010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.647328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.647360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.647567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.647597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.647726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.647756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.647995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.648014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.648137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.648169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.648388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.648420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.648558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.648589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 00:30:36.902 [2024-07-16 00:56:54.648822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.648853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.902 qpair failed and we were unable to recover it. 
00:30:36.902 [2024-07-16 00:56:54.649004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.902 [2024-07-16 00:56:54.649035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.649250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.649291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.649443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.649474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.649619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.649650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.649802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.649833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.650002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.650021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.650206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.650237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.650561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.650593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.650823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.650854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.651084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.651114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 
00:30:36.903 [2024-07-16 00:56:54.651312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.651332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.651472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.651504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.651727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.651758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.652015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.652046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.652276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.652308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.652452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.652487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.652743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.652774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.652924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.652942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.653083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.653102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.653225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.653245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 
00:30:36.903 [2024-07-16 00:56:54.653465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.653485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.653671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.653691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.653883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.653913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.654042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.903 [2024-07-16 00:56:54.654073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.903 qpair failed and we were unable to recover it. 00:30:36.903 [2024-07-16 00:56:54.654329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.654361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.654513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.654544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.654691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.654721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.654846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.654865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.654978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.654998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.655119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.655139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 
00:30:36.904 [2024-07-16 00:56:54.655337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.655369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.655525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.655556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.655707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.655738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.655876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.655906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.656057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.656098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.656279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.656299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.656486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.656505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.656689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.656719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.656931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.656961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.657161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.657192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 
00:30:36.904 [2024-07-16 00:56:54.657368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.657401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.657548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.657579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.657809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.657840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.658007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.658038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.658181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.658212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.658431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.658462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.658593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.658624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.658856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.658887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.660104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.660137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.660334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.660354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 
00:30:36.904 [2024-07-16 00:56:54.660488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.660508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.660798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.660830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.661105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.661136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.661358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.904 [2024-07-16 00:56:54.661391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.904 qpair failed and we were unable to recover it. 00:30:36.904 [2024-07-16 00:56:54.661593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.661624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.661833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.661871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.662117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.662136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.662334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.662354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.662545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.662564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.662748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.662768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 
00:30:36.905 [2024-07-16 00:56:54.662894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.662914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.663028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.663047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.663159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.663178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.663320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.663340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.663458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.663477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.663612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.663643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.663918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.663950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.664187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.664218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.664489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.664521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.664670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.664700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 
00:30:36.905 [2024-07-16 00:56:54.664935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.664975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.665106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.665125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.665211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.665229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.665457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.665477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.665673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.665703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.665915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.665946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.666214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.666277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.666528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.666548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.666729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.666748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.667028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.667058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 
00:30:36.905 [2024-07-16 00:56:54.667271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.667303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.667450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.667469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.667647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.667682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.905 qpair failed and we were unable to recover it. 00:30:36.905 [2024-07-16 00:56:54.667827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.905 [2024-07-16 00:56:54.667858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.668182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.668213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.668453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.668472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.668709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.668739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.669101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.669132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.669386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.669406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.669596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.669614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 
00:30:36.906 [2024-07-16 00:56:54.669798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.669828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.670048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.670079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.670283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.670315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.670469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.670489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.670697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.670716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.670902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.670925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.671116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.671147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.671299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.671330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.671540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.671571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.671733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.671764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 
00:30:36.906 [2024-07-16 00:56:54.672037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.672068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.672290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.672322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.672455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.672486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.672645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.672675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.673990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.674022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.674333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.674355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.674463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.674482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.674624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.674665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.674964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.674995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.675151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.675182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 
00:30:36.906 [2024-07-16 00:56:54.675488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.675508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.675722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.675741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.675866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.675886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.676155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.676175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.676299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.676318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.676506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.676525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.676743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.676762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.676955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.676986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.677152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.677183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.677394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.677425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 
00:30:36.906 [2024-07-16 00:56:54.677645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.677676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.677892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.677922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.678186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.678269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.678513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.678547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.678749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.678781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.678904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.678935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:36.906 qpair failed and we were unable to recover it. 00:30:36.906 [2024-07-16 00:56:54.679079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.906 [2024-07-16 00:56:54.679110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.679328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.679360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.679398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb9070 (9): Bad file descriptor 00:30:36.907 [2024-07-16 00:56:54.679722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.679792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 
00:30:36.907 [2024-07-16 00:56:54.680031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.680065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.680300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.680334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.680558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.680589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.680794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.680815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.681026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.681057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.681206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.681237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.681462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.681493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.681660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.681690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.681844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.681875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.682156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.682187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 
00:30:36.907 [2024-07-16 00:56:54.682414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.682445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.682720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.682751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.682954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.682985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.683195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.683214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.683355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.683375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.683555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.683574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.683754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.683773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.683896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.683937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.684069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.684099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.684307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.684344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 
00:30:36.907 [2024-07-16 00:56:54.684560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.684592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.684753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.684784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.684987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.685018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.685229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.685268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.685594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.685625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.685783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.685814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.685944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.685974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.686121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.686152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.686299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.686320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.686441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.686460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 
00:30:36.907 [2024-07-16 00:56:54.686692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.686712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.686895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.686914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.687157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.907 [2024-07-16 00:56:54.687176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.907 qpair failed and we were unable to recover it. 00:30:36.907 [2024-07-16 00:56:54.687451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.687471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.687599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.687617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.687713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.687732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.687925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.687945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.688072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.688092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.688214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.688233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.688419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.688450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 
00:30:36.908 [2024-07-16 00:56:54.688673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.688703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.688981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.689011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.689203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.689234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.689542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.689572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.689781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.689812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.690014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.690044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.690262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.690328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.690556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.690590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.692123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.692173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.692424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.692445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 
00:30:36.908 [2024-07-16 00:56:54.692648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.692678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.692831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.692861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.693071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.693101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.693294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.693325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.693560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.693591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.693733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.908 [2024-07-16 00:56:54.693764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:36.908 qpair failed and we were unable to recover it. 00:30:36.908 [2024-07-16 00:56:54.693982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.694014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.694251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.694291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.695835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.695870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.696073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.696097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 
00:30:37.188 [2024-07-16 00:56:54.696349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.696382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.697621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.697652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.697879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.697899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.698146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.698165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.698288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.698307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.698565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.698583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.698710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.698748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.698995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.699025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.699199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.699229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.699533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.699555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 
00:30:37.188 [2024-07-16 00:56:54.699733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.699752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.699959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.699978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.700172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.700192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.700386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.700407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.700547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.700566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.700755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.700774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.700947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.700966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.701095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.701126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.701291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.701323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.701554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.701585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 
00:30:37.188 [2024-07-16 00:56:54.701831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.701862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.702073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.702103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.702408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.702450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.702647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.702678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.702979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.703009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.703150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.703182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.703402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.703435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.703709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.703744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.703959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.703990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.704200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.704231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 
00:30:37.188 [2024-07-16 00:56:54.704465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.704484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.704740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.704759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.704879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.704899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.705025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.188 [2024-07-16 00:56:54.705045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.188 qpair failed and we were unable to recover it. 00:30:37.188 [2024-07-16 00:56:54.705296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.705316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.705490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.705510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.705712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.705742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.705891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.705922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.706122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.706141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.706262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.706285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 
00:30:37.189 [2024-07-16 00:56:54.706386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.706405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.706600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.706619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.706861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.706880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.707063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.707083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.707271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.707291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.707408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.707427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.707602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.707621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.707878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.707897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.708966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.708999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.709290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.709310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 
00:30:37.189 [2024-07-16 00:56:54.709533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.709553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.709732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.709763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.710006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.710037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.710276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.710309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.710510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.710540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.710841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.710871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.711024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.711054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.711604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.711628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.711799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.711819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.711996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.712015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 
00:30:37.189 [2024-07-16 00:56:54.712199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.712218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.712422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.712443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.712601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.712621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.712816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.712835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.713103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.713123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.713337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.713357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.713585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.713617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.713789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.713819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.713967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.713998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.714209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.714227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 
00:30:37.189 [2024-07-16 00:56:54.714440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.714472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.714676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.714707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.714857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.714887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.715097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.715116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.715305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.715324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.715473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.715504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.715727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.715757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.715958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.715988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.716139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.716170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.716303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.716341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 
00:30:37.189 [2024-07-16 00:56:54.716580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.716600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.716736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.716755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.716861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.716880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.717093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.717123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.717418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.717450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.717725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.717755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.717956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.717987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.718141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.718160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.718290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.718309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.718419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.718438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 
00:30:37.189 [2024-07-16 00:56:54.718597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.718616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.718810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.718840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.719048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.719079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.719309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.719340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.719580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.719611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.189 qpair failed and we were unable to recover it. 00:30:37.189 [2024-07-16 00:56:54.719822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.189 [2024-07-16 00:56:54.719852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.719981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.720012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.720219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.720250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.720523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.720557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.720688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.720720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 
00:30:37.190 [2024-07-16 00:56:54.720935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.720964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.721112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.721131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.721294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.721314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.721488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.721507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.721716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.721736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.721865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.721884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.722239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.722317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.722641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.722677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.722847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.722878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.723064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.723095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 
00:30:37.190 [2024-07-16 00:56:54.723248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.723288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.723515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.723545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.723767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.723788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.724080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.724100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.724344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.724364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.724514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.724546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.724758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.724789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.725018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.725048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.725271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.725303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.725447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.725486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 
00:30:37.190 [2024-07-16 00:56:54.725693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.725730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.725855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.725877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.725986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.726005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.726248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.726279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.726520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.726540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.726655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.726674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.726810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.726829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.726948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.726968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.727089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.727108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.727405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.727436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 
00:30:37.190 [2024-07-16 00:56:54.727639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.727669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.727894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.727926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.728078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.728109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.728380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.728414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.728658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.728688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.728970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.729000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.729285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.729318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.729527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.729558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.729771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.729802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.730070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.730101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 
00:30:37.190 [2024-07-16 00:56:54.730334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.730366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.730606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.730625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.730896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.730916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.731157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.731176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.731470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.731502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.731796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.731828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.732065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.732096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.732222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.732253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.732400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.732431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 00:30:37.190 [2024-07-16 00:56:54.732636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.190 [2024-07-16 00:56:54.732655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.190 qpair failed and we were unable to recover it. 
00:30:37.190 [2024-07-16 00:56:54.732837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.190 [2024-07-16 00:56:54.732867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:37.190 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for roughly 200 further connection attempts between 00:56:54.733 and 00:56:54.780 (console time 00:30:37.190-00:30:37.194) ...]
00:30:37.194 [2024-07-16 00:56:54.780594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.194 [2024-07-16 00:56:54.780633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:37.194 qpair failed and we were unable to recover it.
00:30:37.194 [2024-07-16 00:56:54.780860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.780891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.781052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.781082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.781405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.781437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.781702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.781721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.781898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.781917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.782073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.782104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.782222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.782281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.782557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.782589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.782794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.782824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.783053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.783084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 
00:30:37.194 [2024-07-16 00:56:54.783323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.783343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.783502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.783521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.783708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.783727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.784023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.784042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.784252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.784278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.784488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.784519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.784672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.784702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.784919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.784950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.785153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.785184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.785412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.785444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 
00:30:37.194 [2024-07-16 00:56:54.785647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.785678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.785885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.785904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.786092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.786112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.786306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.786338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.786498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.786529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.786735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.786765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.786970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.787002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.787244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.787287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.787576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.787611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.787810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.787841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 
00:30:37.194 [2024-07-16 00:56:54.788044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.788074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.788234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.788275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.788491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.788522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.788811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.788841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.789134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.789164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.789364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.789384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.789608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.789644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.789914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.789945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.790181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.790222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.790412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.790432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 
00:30:37.194 [2024-07-16 00:56:54.790542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.790562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.790811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.790830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.194 [2024-07-16 00:56:54.791134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.194 [2024-07-16 00:56:54.791165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.194 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.791460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.791492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.791810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.791841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.792055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.792085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.792285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.792319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.792520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.792539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.792797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.792816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.793000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.793031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 
00:30:37.195 [2024-07-16 00:56:54.793269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.793301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.793541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.793573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.793848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.793866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.794136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.794155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.794307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.794327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.794501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.794521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.794765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.794785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.795034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.795064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.795305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.795324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.795517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.795548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 
00:30:37.195 [2024-07-16 00:56:54.795781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.795811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.796011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.796043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.796253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.796292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.796513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.796544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.796820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.796851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.797148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.797178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.797309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.797341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.797574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.797605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.797874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.797904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.798141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.798172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 
00:30:37.195 [2024-07-16 00:56:54.798417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.798437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.798625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.798645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.798790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.798809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.798914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.798952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.799229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.799283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.799494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.799525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.799730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.799752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.799943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.799962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.800174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.800193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.800385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.800405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 
00:30:37.195 [2024-07-16 00:56:54.800606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.800625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.800887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.800907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.801082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.801101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.801197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.801228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.801497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.801566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.801725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.801760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.802045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.802076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.802291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.802324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.802530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.802561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.802863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.802894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 
00:30:37.195 [2024-07-16 00:56:54.803216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.803250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.803416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.803446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.803594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.803625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.803901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.803931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.804178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.804219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.804492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.804536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.804748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.804779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.805034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.805064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.805381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.805423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.805697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.805716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 
00:30:37.195 [2024-07-16 00:56:54.805821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.805841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.806063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.806082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.806215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.806244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.195 qpair failed and we were unable to recover it. 00:30:37.195 [2024-07-16 00:56:54.806407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.195 [2024-07-16 00:56:54.806437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.806705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.806736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.806880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.806910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.807059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.807090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.807302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.807333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.807497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.807528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.807806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.807837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 
00:30:37.196 [2024-07-16 00:56:54.808057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.808088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.808296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.808327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.808617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.808649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.808820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.808850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.809069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.809100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.809317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.809349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.809652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.809691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.810030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.810060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.810342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.810374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.810585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.810616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 
00:30:37.196 [2024-07-16 00:56:54.810919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.810950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.811169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.811199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.811471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.811504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.811716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.811747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.811917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.811947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.812142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.812174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.812399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.812419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.812677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.812697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.812936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.812967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.813152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.813182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 
00:30:37.196 [2024-07-16 00:56:54.813409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.813441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.813695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.813715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.813857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.813876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.814071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.814090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.814267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.814287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.814369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.814387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.814676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.814706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.814850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.814881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.815178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.815209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.815408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.815428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 
00:30:37.196 [2024-07-16 00:56:54.815553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.815582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.815796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.815826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.815971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.816003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.816289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.816321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.816552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.816621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.816939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.816973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.817115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.817147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.817412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.817433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.817653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.817672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.817916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.817936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 
00:30:37.196 [2024-07-16 00:56:54.818061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.818080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.818213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.818232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.818367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.818387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.818582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.818613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.818830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.818861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.819131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.819162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.819361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.819399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.819548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.819579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.819789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.819820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 00:30:37.196 [2024-07-16 00:56:54.820103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.820134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.196 qpair failed and we were unable to recover it. 
00:30:37.196 [2024-07-16 00:56:54.820334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.196 [2024-07-16 00:56:54.820366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.820622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.820641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.820850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.820869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.821048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.821067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.821358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.821390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.821605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.821635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.821802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.821821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.822103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.822122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.822323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.822343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.822590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.822609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 
00:30:37.197 [2024-07-16 00:56:54.822829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.822848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.822978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.822998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.823199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.823230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.823475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.823506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.823737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.823768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.824001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.824031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.824266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.824298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.824588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.824656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.824825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.824860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.825064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.825096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 
00:30:37.197 [2024-07-16 00:56:54.825230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.825252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.825452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.825483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.825794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.825825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.826034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.826067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.826291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.826324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.826475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.826506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.826796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.826832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.827053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.827084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.827327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.827358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.827536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.827567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 
00:30:37.197 [2024-07-16 00:56:54.827770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.827801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.828097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.828127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.828360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.828401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.828647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.828667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.828869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.828888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.829081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.829101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.829364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.829412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.829652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.829683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.829923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.829954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.830175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.830206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 
00:30:37.197 [2024-07-16 00:56:54.830408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.830439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.830594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.830613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.830830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.830849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.831032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.831063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.831276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.831308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.831580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.831611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.831946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.831977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.832179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.832210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.832366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.832386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.832517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.832537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 
00:30:37.197 [2024-07-16 00:56:54.832818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.832849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.833164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.833194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.833328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.833348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.833642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.833673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.833884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.833915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.834105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.834136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.834356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.834375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.834626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.834645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.834770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.834789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.834981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.835001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 
00:30:37.197 [2024-07-16 00:56:54.835279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.835299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.835435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.835454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.835683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.835713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.197 [2024-07-16 00:56:54.835869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.197 [2024-07-16 00:56:54.835901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.197 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.836057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.836088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.836310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.836341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.836559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.836591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.836754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.836784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.836988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.837020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.837166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.837197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 
00:30:37.198 [2024-07-16 00:56:54.837501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.837531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.837807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.837838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.838045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.838075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.838377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.838396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.838604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.838634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.838904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.838936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.839137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.839172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.839354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.839385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.839529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.839548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.839646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.839666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 
00:30:37.198 [2024-07-16 00:56:54.839838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.839856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.839986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.840017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.840148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.840179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.840414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.840464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.840624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.840643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.840899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.840919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.841128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.841147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.841298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.841318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.841564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.841584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.841876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.841906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 
00:30:37.198 [2024-07-16 00:56:54.842166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.842197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.842517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.842549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.842786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.842816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.843034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.843065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.843203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.843234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.843446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.843478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.843775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.843806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.843944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.843975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.844101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.844133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.844358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.844390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 
00:30:37.198 [2024-07-16 00:56:54.844611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.844641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.844927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.844958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.845243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.845282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.845526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.845546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.845725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.845757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.846024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.846055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.846197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.846227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.846542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.846573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.846842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.846885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.847079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.847099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 
00:30:37.198 [2024-07-16 00:56:54.847275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.847296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.847547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.847566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.847836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.847867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.848106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.848137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.848354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.848385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.848693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.848724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.848943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.848978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.849129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.849160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.849431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.849462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.849759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.849789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 
00:30:37.198 [2024-07-16 00:56:54.850020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.850050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.850291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.850322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.850519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.850549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.850768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.850799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.851069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.851099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.851298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.851330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.198 [2024-07-16 00:56:54.851609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.198 [2024-07-16 00:56:54.851628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.198 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.851918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.851949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.852228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.852285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.852432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.852463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 
00:30:37.199 [2024-07-16 00:56:54.852665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.852684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.852934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.852965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.853236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.853285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.853499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.853518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.853760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.853779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.854041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.854060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.854243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.854284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.854503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.854522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.854770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.854801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.854936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.854966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 
00:30:37.199 [2024-07-16 00:56:54.855184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.855215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.855492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.855511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.855760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.855779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.855954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.855973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.856158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.856177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.856420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.856440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.856629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.856660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.856928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.856959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.857251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.857292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.857591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.857622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 
00:30:37.199 [2024-07-16 00:56:54.857900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.857930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.858069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.858100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.858300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.858332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.858553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.858584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.858887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.858917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.859084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.859114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.859276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.859317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.859519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.859537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.859766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.859786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.860011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.860030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 
00:30:37.199 [2024-07-16 00:56:54.860243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.860279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.860474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.860505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.860704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.860736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.860883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.860924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.861047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.861066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.861240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.861266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.861460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.861480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.861724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.861744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.861937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.861957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.862148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.862167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 
00:30:37.199 [2024-07-16 00:56:54.862299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.862319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.862537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.862568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.862711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.862742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.862997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.863028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.863239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.863279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.863576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.863606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.863875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.863905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.864203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.864234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.864546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.864578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.199 [2024-07-16 00:56:54.864852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.864871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 
00:30:37.199 [2024-07-16 00:56:54.865146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.199 [2024-07-16 00:56:54.865184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.199 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.865383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.865415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.865685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.865727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.865842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.865862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.866061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.866080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.866323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.866343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.866461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.866480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.866735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.866754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.866917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.866936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.867136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.867167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 
00:30:37.200 [2024-07-16 00:56:54.867357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.867376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.867562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.867592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.867895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.867926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.868124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.868155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.868401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.868431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.868652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.868682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.868924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.868954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.869100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.869131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.869428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.869460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.869612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.869631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 
00:30:37.200 [2024-07-16 00:56:54.869908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.869928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.870116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.870136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.870268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.870288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.870498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.870530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.870731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.870762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.870998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.871029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.871252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.871293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.871579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.871598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.871770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.871789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.872063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.872094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 
00:30:37.200 [2024-07-16 00:56:54.872398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.872431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.872630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.872661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.872951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.872981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.873280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.873311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.873606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.873637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.873903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.873933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.874151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.874181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.874450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.874481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.874733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.874752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.875002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.875021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 
00:30:37.200 [2024-07-16 00:56:54.875124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.875143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.875317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.875337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.875469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.875488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.875706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.875730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.875952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.875971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.876163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.876182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.876470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.876490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.876764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.876808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.876953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.876984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.877180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.877211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 
00:30:37.200 [2024-07-16 00:56:54.877424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.877456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.877612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.877643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.877806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.877837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.878054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.878086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.878380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.878411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.878532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.878549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.878732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.878751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.879026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.879070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.879273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.879305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.200 [2024-07-16 00:56:54.879597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.879616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 
00:30:37.200 [2024-07-16 00:56:54.879826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.200 [2024-07-16 00:56:54.879846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.200 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.880141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.880160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.880352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.880384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.880589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.880608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.880887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.880907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.881027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.881047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.881251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.881291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.881521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.881552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.881802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.881833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.882068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.882099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 
00:30:37.201 [2024-07-16 00:56:54.882331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.882363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.882577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.882608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.882764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.882783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.882947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.882977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.883197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.883228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.883533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.883565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.883763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.883792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.884011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.884042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.884240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.884281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.884491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.884511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 
00:30:37.201 [2024-07-16 00:56:54.884641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.884660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.884920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.884950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.885168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.885199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.885502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.885538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.885756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.885775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.885970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.885989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.886108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.886127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.886337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.886368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.886576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.886606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.886897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.886929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 
00:30:37.201 [2024-07-16 00:56:54.887090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.887121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.887415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.887447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.887714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.887745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.888035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.888066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.888298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.888329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.888578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.888646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.888899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.888933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.889152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.889184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.889477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.889498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.889816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.889846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 
00:30:37.201 [2024-07-16 00:56:54.890121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.890152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.890299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.890330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.890546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.890576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.890846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.890877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.891089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.891119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.891263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.891294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.891512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.891543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.891753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.891785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.891994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.892024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.892167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.892197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 
00:30:37.201 [2024-07-16 00:56:54.892521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.892560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.892753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.892772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.893044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.893082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.893298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.893330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.893598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.893639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.893804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.893823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.894032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.894063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.894283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.894315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.894538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.894568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.894693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.894724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 
00:30:37.201 [2024-07-16 00:56:54.894990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.201 [2024-07-16 00:56:54.895021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.201 qpair failed and we were unable to recover it. 00:30:37.201 [2024-07-16 00:56:54.895222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.895253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.895475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.895506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.895716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.895752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.895988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.896008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.896227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.896246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.896384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.896403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.896621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.896640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.896831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.896861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.897115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.897146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 
00:30:37.202 [2024-07-16 00:56:54.897446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.897477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.897683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.897713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.897895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.897925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.898170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.898202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.898452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.898483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.898707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.898726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.899025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.899055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.899201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.899234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.899513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.899543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.899763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.899793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 
00:30:37.202 [2024-07-16 00:56:54.899942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.899972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.900171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.900202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.900459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.900479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.900725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.900755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.900972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.901003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.901204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.901234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.901371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.901402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.901712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.901743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.901951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.901981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.902197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.902228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 
00:30:37.202 [2024-07-16 00:56:54.902444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.902475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.902676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.902695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.902943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.902974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.903283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.903314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.903527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.903558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.903719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.903749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.903890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.903909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.904214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.904244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.904559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.904591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.904817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.904847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 
00:30:37.202 [2024-07-16 00:56:54.905061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.905091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.905363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.905394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.905685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.905725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.905908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.905945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.906197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.906228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.906545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.906576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.906847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.906878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.907097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.907127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.907369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.907400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.907693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.907723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 
00:30:37.202 [2024-07-16 00:56:54.907940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.907970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.908273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.908304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.908516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.908546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.908762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.908792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.909088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.909106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.202 qpair failed and we were unable to recover it. 00:30:37.202 [2024-07-16 00:56:54.909318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.202 [2024-07-16 00:56:54.909338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.909481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.909500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.909782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.909813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.910016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.910047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.910290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.910321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 
00:30:37.203 [2024-07-16 00:56:54.910578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.910609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.910806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.910837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.911050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.911069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.911365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.911385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.911592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.911611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.911868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.911898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.912108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.912139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.912346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.912377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.912645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.912664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.912951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.912981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 
00:30:37.203 [2024-07-16 00:56:54.913123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.913153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.913361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.913392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.913590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.913621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.913845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.913875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.914027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.914058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.914276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.914308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.914505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.914535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.914774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.914794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.914957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.914976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.915265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.915296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 
00:30:37.203 [2024-07-16 00:56:54.915570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.915600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.915872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.915903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.916200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.916231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.916398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.916435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.916579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.916610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.916905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.916935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.917080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.917110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.917392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.917424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.917673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.917703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.917860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.917891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 
00:30:37.203 [2024-07-16 00:56:54.918186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.918217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.918525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.918556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.918856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.918886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.919105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.919136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.919349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.919381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.919514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.919545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.919815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.919846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.920077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.920108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.920359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.920390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.920589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.920619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 
00:30:37.203 [2024-07-16 00:56:54.920829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.920848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.920965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.920983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.921184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.921215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.921496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.921528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.921818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.921849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.922093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.922124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.922363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.922394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.922607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.922626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.922875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.922894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.923010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.923030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 
00:30:37.203 [2024-07-16 00:56:54.923299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.923319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.923594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.923632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.923785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.923816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.924029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.924059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.924294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.924327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.924478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.924509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.924747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.924777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.925057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.925076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.203 [2024-07-16 00:56:54.925267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.203 [2024-07-16 00:56:54.925287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.203 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.925399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.925418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 
00:30:37.204 [2024-07-16 00:56:54.925533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.925550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.925836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.925866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.926064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.926094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.926358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.926396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.926695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.926725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.926955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.926985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.927283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.927315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.927551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.927581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.927782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.927813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.927954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.927984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 
00:30:37.204 [2024-07-16 00:56:54.928195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.928214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.928466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.928485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.928756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.928775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.928924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.928954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.929095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.929126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.929342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.929373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.929583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.929602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.929879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.929910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.930125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.930155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.930365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.930397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 
00:30:37.204 [2024-07-16 00:56:54.930720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.930750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.930926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.930945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.931138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.931157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.931287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.931306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.931490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.931521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.931696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.931727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.931875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.931906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.932084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.932115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.932325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.932357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.932597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.932627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 
00:30:37.204 [2024-07-16 00:56:54.932930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.932961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.933286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.933317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.933601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.933631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.933868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.933888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.934065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.934085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.934300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.934330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.934648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.934678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.934822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.934853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.935000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.935030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.935301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.935332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 
00:30:37.204 [2024-07-16 00:56:54.935625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.935655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.935784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.935815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.936044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.936063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.936312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.936348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.936562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.936581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.936776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.936807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.937022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.937053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.937326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.937357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.937497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.937527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.937656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.937686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 
00:30:37.204 [2024-07-16 00:56:54.937981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.938011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.938211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.938241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.938466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.938498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.938709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.938740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.939008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.939039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.939372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.939404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.939554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.939585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.939888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.939919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.940117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.940148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.940299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.940333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 
00:30:37.204 [2024-07-16 00:56:54.940500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.204 [2024-07-16 00:56:54.940530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.204 qpair failed and we were unable to recover it. 00:30:37.204 [2024-07-16 00:56:54.940735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.940753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.940940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.940970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.941181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.941212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.941462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.941493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.941796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.941827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.942149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.942168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.942305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.942324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.942513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.942532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.942721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.942741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 
00:30:37.205 [2024-07-16 00:56:54.942883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.942914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.943098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.943128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.943346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.943378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.943612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.943642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.943859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.943889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.944207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.944238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.944512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.944544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.944761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.944792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.945001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.945032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.945233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.945279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 
00:30:37.205 [2024-07-16 00:56:54.945509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.945539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.945689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.945708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.945901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.945920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.946102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.946124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.946211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.946229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.946426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.946446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.946624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.946642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.946887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.946918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.947140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.947171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.947462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.947494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 
00:30:37.205 [2024-07-16 00:56:54.947781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.947812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.948023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.948042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.948216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.948235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.948416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.948435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.948637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.948668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.948804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.948835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.948969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.949000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.949216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.949247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.949421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.949452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.949749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.949780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 
00:30:37.205 [2024-07-16 00:56:54.949919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.949949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.950168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.950198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.950425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.950456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.950692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.950723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.950941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.950960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.951231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.951249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.951363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.951383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.951642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.951673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.951883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.951914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.952058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.952089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 
00:30:37.205 [2024-07-16 00:56:54.952311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.952331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.952604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.952635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.952777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.952796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.952970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.952989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.953089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.953107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.953337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.953369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.953571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.205 [2024-07-16 00:56:54.953601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.205 qpair failed and we were unable to recover it. 00:30:37.205 [2024-07-16 00:56:54.953871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.953902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.954078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.954097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.954291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.954311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 
00:30:37.206 [2024-07-16 00:56:54.954483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.954502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.954629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.954659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.954823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.954854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.955056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.955092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.955312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.955332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.955588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.955606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.955722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.955741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.955923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.955942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.956186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.956205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.956399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.956419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 
00:30:37.206 [2024-07-16 00:56:54.956574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.956593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.956863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.956882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.957070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.957089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.957275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.957294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.957584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.957604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.957879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.957910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.958051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.958082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.958295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.958327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.958552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.958582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.958788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.958807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 
00:30:37.206 [2024-07-16 00:56:54.958983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.959002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.959189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.959208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.959402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.959421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.959608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.959627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.959801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.959820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.960072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.960091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.960361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.960381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.960574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.960593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.960880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.960911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.961122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.961152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 
00:30:37.206 [2024-07-16 00:56:54.961371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.961392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.961669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.961700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.961899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.961929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.962093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.962125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.962331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.962350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.962516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.962535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.962709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.962727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.962912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.962931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.963137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.963168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.963374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.963406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 
00:30:37.206 [2024-07-16 00:56:54.963637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.963668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.963885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.963906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.964084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.964103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.964387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.964410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.964602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.964622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.964812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.964831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.965017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.965036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.965161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.965180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.965324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.965344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.965563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.965593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 
00:30:37.206 [2024-07-16 00:56:54.965790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.965821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.966111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.966129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.966264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.966282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.966404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.966423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.966618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.966637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.966774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.966793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.966971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.966991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.967136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.967156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.967267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.967287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 00:30:37.206 [2024-07-16 00:56:54.967537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.206 [2024-07-16 00:56:54.967557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.206 qpair failed and we were unable to recover it. 
00:30:37.206 [2024-07-16 00:56:54.967663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.967683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.967907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.967926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.968102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.968121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.968224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.968244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.968384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.968403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.968504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.968524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.968635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.968654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.968946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.968977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.969135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.969165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.969316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.969348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 
00:30:37.207 [2024-07-16 00:56:54.969642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.969674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.969782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.969813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.970035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.970065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.970201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.970232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.970472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.970503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.970653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.970684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.970951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.970981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.971195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.971226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.971437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.971468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.971614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.971645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 
00:30:37.207 [2024-07-16 00:56:54.971879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.971898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.972085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.972117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.972405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.972437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.972651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.972673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.972883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.972913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.973169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.973199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.973356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.973387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.973566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.973596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.973727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.973757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.973978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.974009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 
00:30:37.207 [2024-07-16 00:56:54.974220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.974239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.974459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.974478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.974749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.974788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.975044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.975075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.975303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.975335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.975546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.975576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.975709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.975739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.976059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.976091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.976365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.976385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.976562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.976581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 
00:30:37.207 [2024-07-16 00:56:54.976701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.976720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.976848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.976867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.977045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.977065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.977249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.977288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.977395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.977415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.977589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.977608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.977797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.977816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.977925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.977943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.978082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.978101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.978304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.978324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 
00:30:37.207 [2024-07-16 00:56:54.978452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.978472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.978588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.978607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.978719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.978736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.978864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.978883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.978995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.979040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.979184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.979215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.979466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.979498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.979701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.979731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.979958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.979988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.980227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.980269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 
00:30:37.207 [2024-07-16 00:56:54.980413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.980444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.207 [2024-07-16 00:56:54.980659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.207 [2024-07-16 00:56:54.980691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.207 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.980957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.980977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.981103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.981138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.981370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.981403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.981607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.981637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.981909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.981940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.982155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.982174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.982286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.982305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.982407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.982426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 
00:30:37.208 [2024-07-16 00:56:54.982641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.982660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.982882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.982901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.983020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.983039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.983217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.983237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.983359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.983395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.983545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.983576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.983775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.983807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.984012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.984043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.984341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.984376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.984651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.984683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 
00:30:37.208 [2024-07-16 00:56:54.984908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.984939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.985158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.985189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.985403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.985436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.985648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.985679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.985954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.985985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.986220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.986266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.986376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.986395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.986518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.986538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.986730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.986748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.986962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.986981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 
00:30:37.208 [2024-07-16 00:56:54.987136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.987156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.987332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.987352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.987461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.987481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.987730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.987748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.987931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.987950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.988226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.988266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.988398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.988430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.988642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.988674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.988874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.988905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.989143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.989162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 
00:30:37.208 [2024-07-16 00:56:54.989345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.989387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.989529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.989559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.989712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.989744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.989949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.989980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.990186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.990217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.990425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.990458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.990664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.990695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.990827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.990857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.991057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.991087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.991241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.991267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 
00:30:37.208 [2024-07-16 00:56:54.991465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.991497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.991795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.991826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.992047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.992077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.992397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.992417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.992694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.992713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.992995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.993025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.993226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.993283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.993563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.993594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.993805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.993836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 00:30:37.208 [2024-07-16 00:56:54.994038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.208 [2024-07-16 00:56:54.994068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.208 qpair failed and we were unable to recover it. 
00:30:37.209 [2024-07-16 00:56:54.994340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.994360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.994538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.994557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.994748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.994779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.994919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.994950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.995146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.995177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.996279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.996310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.996534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.996554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.996738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.996757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.996975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.996996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.997191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.997211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 
00:30:37.209 [2024-07-16 00:56:54.997450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.997474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.997607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.997626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.997874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.997905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.998088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.998119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.998330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.998363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.998634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.998666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.998932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.998962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.999205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.999225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.999558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.999578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:54.999691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.999710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 
00:30:37.209 [2024-07-16 00:56:54.999829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:54.999850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.000022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.000042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.000168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.000187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.000326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.000345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.000532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.000553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.000796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.000815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.001004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.001024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.001141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.001160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.001351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.001372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.002224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.002270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 
00:30:37.209 [2024-07-16 00:56:55.002417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.002438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.002720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.002740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.002862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.002882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.003056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.003076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.003264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.003284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.003402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.003421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.003543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.003562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.003751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.003770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.003953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.003972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.004091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.004110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 
00:30:37.209 [2024-07-16 00:56:55.004374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.004393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.004512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.004531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.004683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.004703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.004928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.004947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.005138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.005159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.005280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.005299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.005414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.005433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.005635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.005654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.005778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.005797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.006082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.006100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 
00:30:37.209 [2024-07-16 00:56:55.006217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.006239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.006426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.006446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.006720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.006739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.006861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.006880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.007061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.007080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.007197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.007217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.007435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.007455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.007658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.007677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.007791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.007811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 00:30:37.209 [2024-07-16 00:56:55.007996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.209 [2024-07-16 00:56:55.008015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.209 qpair failed and we were unable to recover it. 
00:30:37.484 [2024-07-16 00:56:55.008213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.008236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.008497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.008518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.008723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.008742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.009647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.009678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.009882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.009903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.010047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.010066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.010176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.010196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.010316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.010338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.010590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.010609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.010730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.010750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 
00:30:37.484 [2024-07-16 00:56:55.010875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.010894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.011141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.011160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.011365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.011385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.011501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.011521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.011719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.011751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.011910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.011942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.012152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.012182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.012390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.012422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.012649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.012681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.012815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.012834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 
00:30:37.484 [2024-07-16 00:56:55.012949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.012969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.013208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.013228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.013376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.013407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.013540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.013571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.013701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.013733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.013887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.013918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.014059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.014090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.014309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.014341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.484 qpair failed and we were unable to recover it. 00:30:37.484 [2024-07-16 00:56:55.014477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.484 [2024-07-16 00:56:55.014497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.014603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.014621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 
00:30:37.485 [2024-07-16 00:56:55.014800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.014823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.014952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.014971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.015168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.015199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.015426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.015458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.015591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.015623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.015849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.015880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.016192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.016223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.016496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.016516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.016626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.016657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.016791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.016822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 
00:30:37.485 [2024-07-16 00:56:55.016949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.016979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.017136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.017167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.017309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.017328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.017579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.017598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.017800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.017831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.017972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.018003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.018150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.018180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.018336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.018367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.018651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.018683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.018909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.018940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 
00:30:37.485 [2024-07-16 00:56:55.019085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.019105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.019212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.019231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.019347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.019367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.019474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.019493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.019696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.019714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.019824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.019844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.020061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.020092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.020369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.020401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.020644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.020675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.020812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.020831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 
00:30:37.485 [2024-07-16 00:56:55.020941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.020960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.021131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.021150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.021281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.021313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.021515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.021545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.021687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.021718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.021934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.021964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.022119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.022149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.022366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.022399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.485 qpair failed and we were unable to recover it. 00:30:37.485 [2024-07-16 00:56:55.022583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.485 [2024-07-16 00:56:55.022613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.022769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.022799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 
00:30:37.486 [2024-07-16 00:56:55.022996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.023018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.023227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.023246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.023528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.023559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.023778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.023809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.023948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.023984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.024107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.024126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.024364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.024395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.024658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.024688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.024906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.024925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.025027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.025046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 
00:30:37.486 [2024-07-16 00:56:55.025173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.025191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.025382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.025403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.025508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.025527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.025730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.025749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.025910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.025929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.026121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.026141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.026323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.026343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.026518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.026537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.026668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.026699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.026836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.026867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 
00:30:37.486 [2024-07-16 00:56:55.027108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.027138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.027283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.027303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.027429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.027448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.027647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.027677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.027886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.027917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.028080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.028100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.028293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.028324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.028466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.028497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.028736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.028755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.028889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.028908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 
00:30:37.486 [2024-07-16 00:56:55.029207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.029238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.029510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.029541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.029684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.029715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.029911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.029941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.030145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.030176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.030380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.030400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.030593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.030612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.030788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.030818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.031090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.486 [2024-07-16 00:56:55.031121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.486 qpair failed and we were unable to recover it. 00:30:37.486 [2024-07-16 00:56:55.031444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.031475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 
00:30:37.487 [2024-07-16 00:56:55.031757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.031794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.031925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.031956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.032264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.032297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.032499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.032529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.032688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.032719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.032920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.032939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.033205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.033246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.033487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.033517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.033714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.033745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.033888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.033919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 
00:30:37.487 [2024-07-16 00:56:55.034126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.034156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.034390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.034422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.034573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.034603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.034907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.034938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.035142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.035174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.035373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.035404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.035603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.035634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.035844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.035875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.036019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.036049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.036181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.036201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 
00:30:37.487 [2024-07-16 00:56:55.036449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.036468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.036738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.036779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.036923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.036954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.037136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.037167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.037382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.037413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.037603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.037634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.037848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.037879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.038139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.038170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.038380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.038400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.038607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.038626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 
00:30:37.487 [2024-07-16 00:56:55.038875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.038906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.039060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.039090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.039232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.039273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.039483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.039513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.039805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.039836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.039977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.040007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.040280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.040300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.040478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.040497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.040628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.040647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 00:30:37.487 [2024-07-16 00:56:55.040757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.487 [2024-07-16 00:56:55.040777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.487 qpair failed and we were unable to recover it. 
00:30:37.488 [2024-07-16 00:56:55.040898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.040934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.041235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.041273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.041492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.041523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.041747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.041778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.042082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.042113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.042252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.042280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.042408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.042446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.042671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.042702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.042836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.042867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.043062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.043092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 
00:30:37.488 [2024-07-16 00:56:55.043291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.043323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.043619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.043650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.043873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.043904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.044133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.044164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.044370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.044390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.044665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.044696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.045005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.045035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.045225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.045273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.045426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.045458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.045618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.045650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 
00:30:37.488 [2024-07-16 00:56:55.045886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.045917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.046205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.046236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.046542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.046573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.046845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.046875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.047116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.047135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.047354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.047374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.047553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.047572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.047700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.047719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.047925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.047943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.048129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.048148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 
00:30:37.488 [2024-07-16 00:56:55.048424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.048443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.048632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.048651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.048830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.048850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.488 qpair failed and we were unable to recover it. 00:30:37.488 [2024-07-16 00:56:55.049023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.488 [2024-07-16 00:56:55.049042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.049307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.049327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.049547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.049567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.049746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.049765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.049970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.049988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.050169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.050188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.050342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.050361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 
00:30:37.489 [2024-07-16 00:56:55.050683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.050708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.050827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.050846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.050955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.050973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.051148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.051167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.051294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.051314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.051437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.051456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.051608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.051627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.051841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.051860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.051970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.051989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.052116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.052135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 
00:30:37.489 [2024-07-16 00:56:55.052382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.052402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.052610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.052640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.052805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.052835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.053050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.053080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.053284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.053303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.053520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.053540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.053725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.053745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.053932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.053963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.054231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.054278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.054475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.054494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 
00:30:37.489 [2024-07-16 00:56:55.054624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.054643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.054820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.054840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.055032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.055063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.055285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.055317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.055615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.055647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.055877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.055908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.056153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.056184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.056408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.056428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.056727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.056747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.056927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.056946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 
00:30:37.489 [2024-07-16 00:56:55.057114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.057133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.057268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.057287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.057422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.057441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.057630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.057661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.489 qpair failed and we were unable to recover it. 00:30:37.489 [2024-07-16 00:56:55.057891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.489 [2024-07-16 00:56:55.057922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.058055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.058074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.058195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.058214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.058396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.058415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.058633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.058665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.058939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.058969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 
00:30:37.490 [2024-07-16 00:56:55.059202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.059224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.059330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.059349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.059472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.059491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.059691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.059710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.059913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.059932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.060123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.060143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.060249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.060277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.060522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.060541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.060663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.060682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.060799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.060838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 
00:30:37.490 [2024-07-16 00:56:55.060976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.061006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.061151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.061181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.061398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.061418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.061685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.061720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.061974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.062005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.062273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.062304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.062467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.062498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.062694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.062726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.062992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.063011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.063116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.063135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 
00:30:37.490 [2024-07-16 00:56:55.063377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.063397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.063519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.063538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.063645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.063664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.063961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.063992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.064235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.064273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.064543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.064574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.064876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.064907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.065065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.065084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.065205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.065224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.065454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.065486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 
00:30:37.490 [2024-07-16 00:56:55.065731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.065762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.066027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.066063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.066207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.066237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.066403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.066435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.066547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.490 [2024-07-16 00:56:55.066578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.490 qpair failed and we were unable to recover it. 00:30:37.490 [2024-07-16 00:56:55.066741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.066772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.066981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.067012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.067301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.067332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.067459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.067488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.067687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.067717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 
00:30:37.491 [2024-07-16 00:56:55.067948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.067983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.068192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.068211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.068345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.068364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.068539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.068559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.068735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.068754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.068868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.068898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.069095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.069126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.069355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.069386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.069589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.069619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.069787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.069817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 
00:30:37.491 [2024-07-16 00:56:55.070032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.070062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.070273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.070306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.070524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.070555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.070787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.070817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.070952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.070971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.071217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.071248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.071484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.071515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.071743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.071773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.071986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.072004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.072207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.072237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 
00:30:37.491 [2024-07-16 00:56:55.072470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.072501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.072796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.072827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.073098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.073117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.073400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.073420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.073595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.073615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.073809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.073840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.074030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.074061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.074295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.074328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.074559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.074589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.074886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.074917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 
00:30:37.491 [2024-07-16 00:56:55.075135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.075166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.075391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.075422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.075659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.075690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.075908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.075939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.076213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.076244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.076605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.491 [2024-07-16 00:56:55.076636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.491 qpair failed and we were unable to recover it. 00:30:37.491 [2024-07-16 00:56:55.076841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.076871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.077069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.077099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.077304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.077324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.077613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.077644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 
00:30:37.492 [2024-07-16 00:56:55.077778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.077814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.078017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.078047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.078345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.078377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.078672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.078703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.078951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.078982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.079202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.079233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.079406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.079437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.079649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.079679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.079973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.080003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.080296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.080327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 
00:30:37.492 [2024-07-16 00:56:55.080507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.080538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.080753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.080783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.081100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.081131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.081346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.081377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.081626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.081658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.081792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.081823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.081960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.081990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.082226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.082267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.082464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.082494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.082704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.082735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 
00:30:37.492 [2024-07-16 00:56:55.083002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.083032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.083225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.083244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.083444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.083476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.083594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.083624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.083820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.083851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.084059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.084078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.084266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.084285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.084426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.084460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.084605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.492 [2024-07-16 00:56:55.084636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.492 qpair failed and we were unable to recover it. 00:30:37.492 [2024-07-16 00:56:55.084850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.084880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 
00:30:37.493 [2024-07-16 00:56:55.085090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.085121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.085271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.085302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.085571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.085601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.085765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.085796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.085939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.085968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.086209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.086240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.086472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.086503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.086771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.086802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.087003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.087034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.087164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.087195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 
00:30:37.493 [2024-07-16 00:56:55.087396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.087419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.087694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.087725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.088046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.088077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.088327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.088347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.088531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.088550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.088824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.088855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.089069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.089100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.089245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.089293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.089502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.089521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.089706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.089725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 
00:30:37.493 [2024-07-16 00:56:55.089853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.089873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.090044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.090063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.090238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.090263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.090535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.090555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.090641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.090659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.090849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.090880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.091179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.091210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.091525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.091545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.091735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.091754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.091935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.091966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 
00:30:37.493 [2024-07-16 00:56:55.092187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.092217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.092429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.092461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.092671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.092701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.092827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.092858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.092997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.093028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.093227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.093268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.093480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.093499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.093638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.093657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.093842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.093861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 00:30:37.493 [2024-07-16 00:56:55.094137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.493 [2024-07-16 00:56:55.094168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.493 qpair failed and we were unable to recover it. 
00:30:37.494 [2024-07-16 00:56:55.094395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.094427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.094574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.094605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.094735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.094766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.095064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.095094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.095285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.095305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.095493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.095524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.095712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.095743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.095945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.095975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.096300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.096337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.096634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.096664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 
00:30:37.494 [2024-07-16 00:56:55.096863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.096902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.097198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.097229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.097533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.097564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.097728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.097759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.097988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.098019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.098311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.098331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.098518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.098537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.098802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.098821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.098939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.098981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.099205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.099235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 
00:30:37.494 [2024-07-16 00:56:55.099501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.099532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.099752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.099783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.100002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.100033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.100251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.100291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.100444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.100475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.100748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.100778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.101070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.101100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.101330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.101349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.101470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.101489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.101666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.101686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 
00:30:37.494 [2024-07-16 00:56:55.101933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.101951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.102196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.102226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.102462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.102493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.102729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.102748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.102997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.103015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.103207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.103226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.103411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.103443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.103690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.103761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.104085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.104118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 00:30:37.494 [2024-07-16 00:56:55.104356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.494 [2024-07-16 00:56:55.104391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:37.494 qpair failed and we were unable to recover it. 
00:30:37.494 [2024-07-16 00:56:55.104611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.104632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.104879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.104899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.105000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.105019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.105271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.105303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.105519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.105549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.105695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.105725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.105966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.105997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.106161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.106191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.106378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.106410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.106705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.106736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 
00:30:37.495 [2024-07-16 00:56:55.106973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.107004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.107155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.107175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.107367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.107386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.107660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.107691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.107915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.107947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.108089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.108120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.108416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.108447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.108613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.108642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.108851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.108881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.109080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.109099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 
00:30:37.495 [2024-07-16 00:56:55.109289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.109309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.109559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.109577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.109762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.109782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.109972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.109991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.110114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.110145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.110312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.110344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.110612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.110643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.110883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.110913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.111051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.111081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.111299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.111331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 
00:30:37.495 [2024-07-16 00:56:55.111622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.111653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.111949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.111979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.112203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.112234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.112517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.112548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.112710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.112740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.112957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.112987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.113282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.113314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.113608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.113643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.113787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.113818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.114114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.114133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 
00:30:37.495 [2024-07-16 00:56:55.114352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.495 [2024-07-16 00:56:55.114384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.495 qpair failed and we were unable to recover it. 00:30:37.495 [2024-07-16 00:56:55.114537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.114568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.114842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.114872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.115173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.115203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.115450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.115481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.115691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.115722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.115922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.115952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.116197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.116227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.116462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.116493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.116691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.116721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-07-16 00:56:55.116918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.116949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.117181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.117213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.117439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.117471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.117740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.117771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.117940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.117971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.118157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.118188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.118503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.118534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.118764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.118795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.119014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.119045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.119317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.119336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-07-16 00:56:55.119507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.119525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.119646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.119665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.119752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.119770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.119994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.120025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.120244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.120284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.120562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.120593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.120801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.120831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.120974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.121005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.121228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.121247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.121453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.121484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-07-16 00:56:55.121610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.121641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.121863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.121893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.122106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.122137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.122334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.122367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.122604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.122634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.122909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.122940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.123073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.123103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.123388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.123438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.123629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.123649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.123890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.123909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 
00:30:37.496 [2024-07-16 00:56:55.124114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.124133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.496 [2024-07-16 00:56:55.124401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.496 [2024-07-16 00:56:55.124420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.496 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.124667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.124686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.124977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.125008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.125222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.125262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.125537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.125568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.125864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.125894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.126132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.126163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.126491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.126512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.126802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.126833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 
00:30:37.497 [2024-07-16 00:56:55.127130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.127161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.127332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.127351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.127595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.127614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.127750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.127770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.128044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.128063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.128244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.128271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.128488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.128507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.128767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.128797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.129062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.129081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.129272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.129291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 
00:30:37.497 [2024-07-16 00:56:55.129552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.129572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.129774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.129804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.130035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.130065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.130203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.130234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.130539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.130573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.130730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.130761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.130963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.130994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.131291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.131323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.131486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.131504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.131716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.131736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 
00:30:37.497 [2024-07-16 00:56:55.131997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.132039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.132190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.132220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.132382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.132414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.132701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.132731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.133000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.133031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.133235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.133274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.133388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.497 [2024-07-16 00:56:55.133418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.497 qpair failed and we were unable to recover it. 00:30:37.497 [2024-07-16 00:56:55.133692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.133728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.133923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.133954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.134174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.134205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 
00:30:37.498 [2024-07-16 00:56:55.134374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.134405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.134689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.134708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.134888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.134908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.134994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.135012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.135200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.135219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.135413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.135445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.135734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.135765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.135893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.135923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.136151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.136182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.136383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.136415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 
00:30:37.498 [2024-07-16 00:56:55.136538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.136557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.136642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.136660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.136781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.136800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.136983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.137013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.137167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.137198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.137408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.137439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.137718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.137787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.138122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.138155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.138397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.138432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.138680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.138712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 
00:30:37.498 [2024-07-16 00:56:55.138958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.138989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.139245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.139289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.139501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.139532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.139807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.139838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.140050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.140080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.140299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.140331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.140534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.140565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.140795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.140825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.141040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.141071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.141283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.141314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 
00:30:37.498 [2024-07-16 00:56:55.141529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.141560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd3c000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.141684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.141707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.141904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.141934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.142227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.142266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.142424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.142468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.142623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.142642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.142817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.142836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.498 [2024-07-16 00:56:55.143020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.498 [2024-07-16 00:56:55.143055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.498 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.143330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.143361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.143563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.143594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 
00:30:37.499 [2024-07-16 00:56:55.143879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.143909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.144190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.144221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.144526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.144556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.144759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.144789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.144988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.145019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.145170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.145200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.145510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.145541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.145694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.145725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.146031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.146062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.146271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.146302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 
00:30:37.499 [2024-07-16 00:56:55.146439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.146459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.146628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.146647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.146845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.146876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.147006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.147037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.147283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.147303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.147418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.147436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.147568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.147587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.147864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.147895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.148063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.148093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.148304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.148335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 
00:30:37.499 [2024-07-16 00:56:55.148541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.148572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.148788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.148818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.148965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.148995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.149225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.149265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.149481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.149511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.149745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.149776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.149910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.149940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.150158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.150189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.150457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.150489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.150683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.150702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 
00:30:37.499 [2024-07-16 00:56:55.150909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.150940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.151141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.151172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.151333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.151365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.151493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.151512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.151703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.151722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.151968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.151998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.152165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.152195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.152415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.152472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.499 [2024-07-16 00:56:55.152771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.499 [2024-07-16 00:56:55.152790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.499 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.153008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.153027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 
00:30:37.500 [2024-07-16 00:56:55.153205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.153224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.153397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.153429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.153644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.153675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.153876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.153906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.154055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.154085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.154231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.154270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.154567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.154598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.154809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.154839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.155108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.155138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.155403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.155423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 
00:30:37.500 [2024-07-16 00:56:55.155560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.155579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.155840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.155870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.156168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.156199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.156357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.156376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.156595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.156626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.156840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.156871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.157080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.157111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.157383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.157414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.157682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.157712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.157846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.157877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 
00:30:37.500 [2024-07-16 00:56:55.158077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.158108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.158327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.158358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.158678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.158708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.158919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.158950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.159270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.159302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.159528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.159558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.159684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.159715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.159848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.159879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.160086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.160116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.160245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.160286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 
00:30:37.500 [2024-07-16 00:56:55.160581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.160612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.160825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.160856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.161128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.161169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.161296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.161316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.161504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.161535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.161677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.161708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.161979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.162020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.162142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.162164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.162275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.500 [2024-07-16 00:56:55.162296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.500 qpair failed and we were unable to recover it. 00:30:37.500 [2024-07-16 00:56:55.162539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.162558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 
00:30:37.501 [2024-07-16 00:56:55.162745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.162764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.162900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.162919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.163063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.163081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.163270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.163289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.163401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.163419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.163705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.163736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.164022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.164053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.164243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.164284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.164416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.164455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.164727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.164747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 
00:30:37.501 [2024-07-16 00:56:55.164911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.164930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.165046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.165065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.165168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.165188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.165369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.165389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.165577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.165608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.165806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.165837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.166111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.166150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.166356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.166376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.166498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.166517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.166637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.166656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 
00:30:37.501 [2024-07-16 00:56:55.166851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.166870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.167049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.167069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.167318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.167338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.167615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.167634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.167754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.167774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.168061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.168091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.168309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.168340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.168550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.168580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.168852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.168883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.169177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.169207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 
00:30:37.501 [2024-07-16 00:56:55.169357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.169389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.169521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.169540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.169731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.169751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.170004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.501 [2024-07-16 00:56:55.170023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.501 qpair failed and we were unable to recover it. 00:30:37.501 [2024-07-16 00:56:55.170195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.170215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.170483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.170502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.170679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.170708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.170936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.170972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.171298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.171338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.171581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.171601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 
00:30:37.502 [2024-07-16 00:56:55.171912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.171943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.172091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.172123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.172332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.172364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.172584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.172614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.172831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.172861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.173082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.173112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.173359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.173391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.173550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.173569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.173728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.173758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.174047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.174078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 
00:30:37.502 [2024-07-16 00:56:55.174227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.174282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.174513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.174544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.174770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.174801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.175011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.175042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.175172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.175192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.175437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.175458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.175655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.175673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.175857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.175876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.176067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.176086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.176194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.176213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 
00:30:37.502 [2024-07-16 00:56:55.176398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.176430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.176712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.176742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.177020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.177050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.177212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.177242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.177485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.177517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.177754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.177785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.177944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.177974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.178177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.178208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.178416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.178447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.178653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.178673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 
00:30:37.502 [2024-07-16 00:56:55.178865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.178885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.178991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.179010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.179185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.179204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.179400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.502 [2024-07-16 00:56:55.179419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.502 qpair failed and we were unable to recover it. 00:30:37.502 [2024-07-16 00:56:55.179633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.179653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.179908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.179938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.180250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.180291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.180611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.180647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.180823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.180853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.181177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.181208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 
00:30:37.503 [2024-07-16 00:56:55.181492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.181523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.181753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.181784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.182059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.182090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.182306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.182326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.182523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.182542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.182665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.182684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.182860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.182890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.183186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.183218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.183453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.183472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.183745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.183781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 
00:30:37.503 [2024-07-16 00:56:55.183937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.183967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.184149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.184181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.184374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.184406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.184702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.184732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.185025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.185055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.185267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.185287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.185533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.185552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.185671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.185690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.185820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.185839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.185960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.185979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 
00:30:37.503 [2024-07-16 00:56:55.186277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.186296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.186439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.186459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.186677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.186696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.186804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.186822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.187071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.187101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.187332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.187364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.187567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.187586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.187789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.187808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.188074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.188093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.188288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.188308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 
00:30:37.503 [2024-07-16 00:56:55.188558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.188577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.188752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.188772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.188947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.188967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3224708 Killed "${NVMF_APP[@]}" "$@" 00:30:37.503 [2024-07-16 00:56:55.189212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.189231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.503 qpair failed and we were unable to recover it. 00:30:37.503 [2024-07-16 00:56:55.189432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.503 [2024-07-16 00:56:55.189451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.189616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.189635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:30:37.504 [2024-07-16 00:56:55.189881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.189901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:37.504 [2024-07-16 00:56:55.190082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.190102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.190237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.190261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 
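Context for the repeated errors above: the "Killed" message shows that target_disconnect.sh (line 36) has just killed the running nvmf_tgt process (PID 3224708), and disconnect_init is about to bring the target back up on 10.0.0.2. Until that happens, every connect() attempt toward port 4420 is refused, and errno 111 is ECONNREFUSED on Linux. A minimal C sketch, illustrative only and not SPDK code, of how such a refused connect() reports that errno (the address and port simply mirror the log):

/* Illustrative only: a TCP connect() toward an address that is reachable but has
 * no listener on the port fails with ECONNREFUSED (errno 111), which is what
 * posix_sock_create logs above while the NVMe/TCP target is down. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    /* With the target killed and not yet restarted, the peer answers the SYN with
     * a RST and connect() fails immediately with ECONNREFUSED. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

The host side keeps retrying, which is why the same posix.c/nvme_tcp.c error pair repeats below until the restarted target is listening on port 4420 again.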
00:30:37.504 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:37.504 [2024-07-16 00:56:55.190418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.190438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.190585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.190604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:37.504 [2024-07-16 00:56:55.190707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.190725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:37.504 [2024-07-16 00:56:55.190992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.191012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.191233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.191251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.191497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.191516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.191703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.191722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.192011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.192031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.192295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.192317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 
00:30:37.504 [2024-07-16 00:56:55.192534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.192557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.192778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.192798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.192916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.192935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.193129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.193148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.193436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.193456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.193596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.193615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.193743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.193762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.194003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.194023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.194135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.194154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.194275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.194295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 
00:30:37.504 [2024-07-16 00:56:55.194421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.194440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.194549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.194567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.194745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.194765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.195015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.195035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.195165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.195184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.195367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.195386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.195566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.195585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.195780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.195800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.195902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.195920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.196164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.196182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 
00:30:37.504 [2024-07-16 00:56:55.196453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.196473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.196734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.196753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.196859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.196878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.197074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.197093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 [2024-07-16 00:56:55.197208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.504 [2024-07-16 00:56:55.197227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.504 qpair failed and we were unable to recover it. 00:30:37.504 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3225532 00:30:37.504 [2024-07-16 00:56:55.197489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.197509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.197681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.197701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3225532 00:30:37.505 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:37.505 [2024-07-16 00:56:55.197882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.197902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 
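The relaunch command captured above starts a fresh nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and records its PID as nvmfpid=3225532, which waitforlisten then polls until the new process is up (the "Waiting for process to start up..." messages below). The -m 0xF0 argument is the hexadecimal CPU mask: each set bit selects one core, so 0xF0 picks cores 4-7. A small illustrative C sketch, not part of the test scripts, decoding such a mask:

/* Illustrative only: decode an SPDK/DPDK-style hex CPU mask such as the
 * "-m 0xF0" passed to nvmf_tgt above. 0xF0 = 0b11110000 -> cores 4, 5, 6, 7. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;  /* value taken from the command line in the log */

    printf("cpumask 0x%lX selects cores:", mask);
    for (int core = 0; mask != 0; core++, mask >>= 1) {
        if (mask & 1UL) {
            printf(" %d", core);
        }
    }
    printf("\n");
    return 0;
}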
00:30:37.505 [2024-07-16 00:56:55.198022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.198041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3225532 ']' 00:30:37.505 [2024-07-16 00:56:55.198294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.198314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.505 [2024-07-16 00:56:55.198447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.198467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.198644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.198665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:37.505 [2024-07-16 00:56:55.198838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.198857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.505 [2024-07-16 00:56:55.199044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.199064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:37.505 [2024-07-16 00:56:55.199274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.199295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 
00:30:37.505 [2024-07-16 00:56:55.199515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.199534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 00:56:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.199737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.199756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.199950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.199969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.200215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.200234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.200489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.200509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.200694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.200713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.200835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.200854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.201115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.201134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.201239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.201266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 
00:30:37.505 [2024-07-16 00:56:55.201450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.201470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.201566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.201585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.201788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.201807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.201910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.201929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.202142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.202162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.202377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.202402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.202516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.202534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.202743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.202760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.202885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.202903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.203098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.203115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 
00:30:37.505 [2024-07-16 00:56:55.203300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.203319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.203443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.203462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.505 [2024-07-16 00:56:55.203601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.505 [2024-07-16 00:56:55.203620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.505 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.203875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.203895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.204160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.204178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.204295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.204315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.204498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.204517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.204710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.204729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.204972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.204991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.205179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.205198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 
00:30:37.506 [2024-07-16 00:56:55.205384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.205403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.205708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.205727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.205896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.205916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.206135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.206155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.206291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.206311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.206586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.206604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.206830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.206849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.206961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.206980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.207110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.207129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.207319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.207338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 
00:30:37.506 [2024-07-16 00:56:55.207547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.207567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.207748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.207768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.207996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.208015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.208200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.208220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.208498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.208517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.208709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.208728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.208911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.208930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.209114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.209133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.209266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.209286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.209480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.209501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 
00:30:37.506 [2024-07-16 00:56:55.209683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.209702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.209817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.209837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.210040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.210058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.210237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.210264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.210372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.210391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.210570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.210592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.210866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.210885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.211083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.211102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.211225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.211244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.211432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.211451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 
00:30:37.506 [2024-07-16 00:56:55.211679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.211697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.211829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.211848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.212027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.212047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.506 qpair failed and we were unable to recover it. 00:30:37.506 [2024-07-16 00:56:55.212292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.506 [2024-07-16 00:56:55.212311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.212551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.212571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.212767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.212786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.212979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.212998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.213178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.213198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.213391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.213411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.213586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.213606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 
00:30:37.507 [2024-07-16 00:56:55.213709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.213728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.213916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.213935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.214072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.214091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.214197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.214216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.214409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.214428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.214564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.214583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.214702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.214721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.214891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.214910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.215184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.215204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.215427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.215446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 
00:30:37.507 [2024-07-16 00:56:55.215621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.215641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.215756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.215775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.215911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.215931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.216118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.216137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.216322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.216342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.216538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.216558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.216690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.216710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.216972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.216991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.217178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.217197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.217315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.217334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 
00:30:37.507 [2024-07-16 00:56:55.217519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.217538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.217722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.217742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.217855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.217875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.218041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.218060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.218234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.218260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.218397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.218420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.218616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.218635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.218827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.218847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.219024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.219044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.219229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.219248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 
00:30:37.507 [2024-07-16 00:56:55.219571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.219589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.219705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.219725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.219846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.219866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.507 [2024-07-16 00:56:55.220110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.507 [2024-07-16 00:56:55.220129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.507 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.220304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.220324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.220446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.220465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.220642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.220661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.220845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.220864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.221043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.221062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.221271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.221291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 
00:30:37.508 [2024-07-16 00:56:55.221458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.221477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.221663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.221682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.221996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.222015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.222140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.222159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.222400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.222419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.222529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.222549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.222693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.222712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.222900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.222919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.223110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.223129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.223325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.223345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 
00:30:37.508 [2024-07-16 00:56:55.223563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.223582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.223760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.223779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.223902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.223921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.224044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.224063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.224243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.224278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.224439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.224458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.224565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.224584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.224707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.224726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.224851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.224870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.224973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.224991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 
00:30:37.508 [2024-07-16 00:56:55.225197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.225216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.225451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.225470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.225663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.225682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.225955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.225974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.226165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.226185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.226365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.226387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.226581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.226601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.227005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.227030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.227350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.227370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.227562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.227581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 
00:30:37.508 [2024-07-16 00:56:55.227790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.227809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.228001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.228020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.228224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.228243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.228498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.508 [2024-07-16 00:56:55.228518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.508 qpair failed and we were unable to recover it. 00:30:37.508 [2024-07-16 00:56:55.228723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.228742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.228863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.228881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.229116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.229135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.229318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.229338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.229524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.229544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.229673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.229692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 
00:30:37.509 [2024-07-16 00:56:55.229884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.229903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.230073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.230092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.230356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.230375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.230645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.230665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.230879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.230898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.231024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.231043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.231215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.231234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.231433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.231452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.231566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.231584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.231763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.231782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 
00:30:37.509 [2024-07-16 00:56:55.231964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.231983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.232168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.232187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.232437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.232457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.232649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.232667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.232901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.232920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.233051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.233070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.233208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.233227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.233348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.233368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.233554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.233573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.233698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.233717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 
00:30:37.509 [2024-07-16 00:56:55.233940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.233959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.234074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.234093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.234284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.234304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.234416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.234435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.234635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.234655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.234778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.234800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.234917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.234936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.235024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.235042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.235335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.235355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.235480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.235499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 
00:30:37.509 [2024-07-16 00:56:55.235679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.235698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.235814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.235833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.235957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.235976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.509 [2024-07-16 00:56:55.236101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.509 [2024-07-16 00:56:55.236121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.509 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.236244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.236270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.236477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.236496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.236606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.236625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.236740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.236760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.236880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.236900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.237071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.237090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 
00:30:37.510 [2024-07-16 00:56:55.237208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.237227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.237438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.237458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.237633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.237652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.237827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.237847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.237974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.237993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.238199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.238218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.238473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.238493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.238683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.238703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.238854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.238873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.239117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.239136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 
00:30:37.510 [2024-07-16 00:56:55.239408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.239428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.239670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.239689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.239812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.239832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.240010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.240028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.240146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.240165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.240278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.240298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.240470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.240489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.240592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.240611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.240833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.240852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.241139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.241158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 
00:30:37.510 [2024-07-16 00:56:55.241395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.241415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.241598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.241616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.241729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.241748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.241895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.241914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.242093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.242113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.242409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.242437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.242563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.242582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.242701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.242720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.242911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.242930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 00:30:37.510 [2024-07-16 00:56:55.243065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.243085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.510 qpair failed and we were unable to recover it. 
00:30:37.510 [2024-07-16 00:56:55.243268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.510 [2024-07-16 00:56:55.243289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.243463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.243482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.243670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.243689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.243877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.243896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.244006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.244025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.244199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.244219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.244411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.244431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.244522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.244539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.244817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.244837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.244960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.244979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 
00:30:37.511 [2024-07-16 00:56:55.245092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.245110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.245292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.245311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.245483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.245502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.245696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.245715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.245891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.245911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.246106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.246125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.246332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.246352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.246527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.246547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.246789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.246807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.246983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.247002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 
00:30:37.511 [2024-07-16 00:56:55.247328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.247348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.247563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.247582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.247695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.247715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.247931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.247949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.248085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.248104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.248282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.248302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.248544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.248563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.248682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.248701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.248875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.248894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.249177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.249196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 
00:30:37.511 [2024-07-16 00:56:55.249307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.249326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.249511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.249530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.249653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.249672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.249797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.249816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.250008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.250027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.250143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.250165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.250282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.250301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.250287] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:30:37.511 [2024-07-16 00:56:55.250339] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.511 [2024-07-16 00:56:55.250436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.250454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.250633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.250650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it.
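The repeated errno = 111 here is ECONNREFUSED: the de-interleaved records above show an nvmf target process (the "Starting SPDK v24.09-pre ..." and "[ DPDK EAL parameters: nvmf ... ]" lines) only just initializing, so nothing is yet accepting connections on 10.0.0.2:4420 and every connect() from the initiator is refused. The following standalone C sketch, which assumes only the address and port taken from the log and is not part of the test scripts, reproduces the same failure mode outside SPDK:

    /* Sketch only: show the errno = 111 (ECONNREFUSED) that posix_sock_create
     * reports when no listener is bound to the NVMe/TCP port yet. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno 111
             * (ECONNREFUSED), the same value shown in the errors above. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Once the target finishes initialization and binds the listener, the same connect() succeeds, which is why these errors normally stop after the startup window.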
00:30:37.511 [2024-07-16 00:56:55.250871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.511 [2024-07-16 00:56:55.250888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.511 qpair failed and we were unable to recover it. 00:30:37.511 [2024-07-16 00:56:55.251120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.251137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.251329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.251348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.251501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.251520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.251629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.251648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.251827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.251847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.252036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.252055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.252232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.252251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.252383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.252403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.252508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.252525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 
00:30:37.512 [2024-07-16 00:56:55.252628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.252650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.252772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.252792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.252977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.252996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.253166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.253185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.253290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.253309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.253489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.253508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.253684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.253703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.253830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.253849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.254045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.254064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.254241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.254267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 
00:30:37.512 [2024-07-16 00:56:55.254375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.254394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.254520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.254539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.254725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.254744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.254903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.254922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.255166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.255185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.255363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.255384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.255594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.255613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.255876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.255894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.256085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.256104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.256229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.256248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 
00:30:37.512 [2024-07-16 00:56:55.256485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.256504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.256677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.256696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.256894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.256912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.257035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.257054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.257183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.257202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.257391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.257414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.257623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.257642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.257828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.257847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.258023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.258042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.258162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.258180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 
00:30:37.512 [2024-07-16 00:56:55.258475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.258494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.512 [2024-07-16 00:56:55.258755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.512 [2024-07-16 00:56:55.258773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.512 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.259041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.259060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.259186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.259204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.259337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.259357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.259605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.259624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.259761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.259780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.259977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.259996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.260240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.260265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.260545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.260564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 
00:30:37.513 [2024-07-16 00:56:55.260666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.260685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.260984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.261003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.261209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.261227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.261423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.261443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.261687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.261705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.261919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.261938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.262197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.262216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.262379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.262399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.262508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.262527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.262702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.262721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 
00:30:37.513 [2024-07-16 00:56:55.262826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.262845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.263031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.263050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.263243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.263274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.263470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.263490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.263663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.263682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.263784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.263803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.263998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.264017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.264221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.264240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.264383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.264403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.264591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.264610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 
00:30:37.513 [2024-07-16 00:56:55.264787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.264806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.264929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.264948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.265061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.265081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.265185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.265205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.265322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.265341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.265519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.265541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.265731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.265750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.265878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.265896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.266109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.266129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.266307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.266327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 
00:30:37.513 [2024-07-16 00:56:55.266603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.266622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.266751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.266770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.513 [2024-07-16 00:56:55.267016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.513 [2024-07-16 00:56:55.267036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.513 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.267227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.267245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.267471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.267491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.267601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.267618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.267874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.267893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.268024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.268043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.268233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.268253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.268465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.268484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 
00:30:37.514 [2024-07-16 00:56:55.268611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.268630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.268834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.268853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.269026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.269045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.269171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.269191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.269435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.269455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.269628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.269648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.269942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.269962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.270149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.270168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.270405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.270425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.270536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.270555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 
00:30:37.514 [2024-07-16 00:56:55.270735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.270754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.270975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.270994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.271172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.271191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.271389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.271408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.271612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.271631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.271815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.271834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.271938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.271957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.272086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.272105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.272378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.272398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.272583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.272602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 
00:30:37.514 [2024-07-16 00:56:55.272712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.272731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.272864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.272883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.273055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.273074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.273252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.273279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.273464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.273483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.273687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.273710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.273928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.273947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.274154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.514 [2024-07-16 00:56:55.274173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.514 qpair failed and we were unable to recover it. 00:30:37.514 [2024-07-16 00:56:55.274365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.274385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.274532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.274551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 
00:30:37.515 [2024-07-16 00:56:55.274824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.274843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.275040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.275059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.275239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.275287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.275467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.275487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.275697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.275716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.275837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.275856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.276037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.276057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.276234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.276260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.276443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.276462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.276718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.276737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 
00:30:37.515 [2024-07-16 00:56:55.276847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.276866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.277020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.277038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.277317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.277336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.277618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.277637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.277910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.277929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.278171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.278190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.278412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.278431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.278624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.278643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.278890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.278908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 00:30:37.515 [2024-07-16 00:56:55.279131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.515 [2024-07-16 00:56:55.279150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.515 qpair failed and we were unable to recover it. 
00:30:37.515 [2024-07-16 00:56:55.279418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.515 [2024-07-16 00:56:55.279438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:37.515 qpair failed and we were unable to recover it.
00:30:37.515 [... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it.", repeats for every retry from 00:56:55.279 through 00:56:55.323 ...]
00:30:37.516 EAL: No free 2048 kB hugepages reported on node 1
00:30:37.794 [2024-07-16 00:56:55.323061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.794 [2024-07-16 00:56:55.323080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:37.794 qpair failed and we were unable to recover it.
00:30:37.794 [2024-07-16 00:56:55.323325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.323345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.323617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.323636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.323919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.323939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.324186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.324205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.324389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.324409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.324602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.324628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.324830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.324849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.324962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.324980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.325172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.325191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.325365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.325385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 
00:30:37.794 [2024-07-16 00:56:55.325574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.325593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.325724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.325742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.325926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.325945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.326147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.326166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.326292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.326312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.326420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.326439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.326633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.326653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.326849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.326868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.327052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.327071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.327365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.327385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 
00:30:37.794 [2024-07-16 00:56:55.327525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.327544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.327722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.327742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.327918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.327937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.328143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.328162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.794 [2024-07-16 00:56:55.328374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.794 [2024-07-16 00:56:55.328394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.794 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.328555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.328573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.328750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.328769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.329001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.329021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.329139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.329158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.329348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.329367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 
00:30:37.795 [2024-07-16 00:56:55.329493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.329512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.329690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.329709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.329984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.330004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.330192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.330211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.330329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.330349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.330457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.330476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.330604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.330623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.330871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.330890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.331071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.331090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.331286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.331306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 
00:30:37.795 [2024-07-16 00:56:55.331493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.331512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.331696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.331715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.331919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.331938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.332048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.332067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.332270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.332290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.332414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.332436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.332645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.332664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.332780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.332799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.332926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.332945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.333145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.333164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 
00:30:37.795 [2024-07-16 00:56:55.333343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.333363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.333548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.333568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.333751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.333771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.334006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.334026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.334199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.334218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.334421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.334441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.334631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.334651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.334820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.334839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.334943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.334962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.335182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.335202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 
00:30:37.795 [2024-07-16 00:56:55.335335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.335355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.335569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.335589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.335713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.335733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.795 [2024-07-16 00:56:55.335907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.795 [2024-07-16 00:56:55.335926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.795 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.336042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.336062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.336267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.336286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.336485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.336505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.336752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.336772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.336943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.336963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.337082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.337101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 
00:30:37.796 [2024-07-16 00:56:55.337348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.337368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.337476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.337494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.337675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.337694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.337935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.337954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.338133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.338153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.338331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.338350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.338624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.338643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.338845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.338864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.339037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.339055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.339247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.339275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 
00:30:37.796 [2024-07-16 00:56:55.339478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.339498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.339690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.339709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.339955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.339975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.340089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.340108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.340305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.340325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.340587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.340610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.340788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.340807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.341010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.341030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.341223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.341243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.341357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.341376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 
00:30:37.796 [2024-07-16 00:56:55.341496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.341515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.341629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.341649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.341822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.341841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.342037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.342056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.342186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.342206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.342449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.342469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.342589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.796 [2024-07-16 00:56:55.342608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.796 qpair failed and we were unable to recover it. 00:30:37.796 [2024-07-16 00:56:55.342881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.342901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.343088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.343107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.343246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.343273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 
00:30:37.797 [2024-07-16 00:56:55.343382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.343401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.343611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.343631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.343833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.343853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.344124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.344143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.344355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.344375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.344519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.344538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.344746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.344765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.344982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.345003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.345110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.345129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.345243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.345270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 
00:30:37.797 [2024-07-16 00:56:55.345556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.345577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.345761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.345782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.345971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.345991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.346169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.346188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.346380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.346400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.346492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.346512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.346655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.346675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.346919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.346939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.347098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.347118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.347309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.347329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 
00:30:37.797 [2024-07-16 00:56:55.347601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.347621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.347796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.347816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.348000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.348019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.348193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.348213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.348316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.348340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.348522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.348545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.348675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.348695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.348939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.348959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.349137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.349157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 00:30:37.797 [2024-07-16 00:56:55.349345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.797 [2024-07-16 00:56:55.349365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.797 qpair failed and we were unable to recover it. 
00:30:37.797 [2024-07-16 00:56:55.349494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.797 [2024-07-16 00:56:55.349514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:37.797 qpair failed and we were unable to recover it.
00:30:37.797-00:30:37.800 [the same three-line sequence - posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." - repeats unchanged for every further connect attempt from 00:56:55.349784 through 00:56:55.372925]
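For context on the error code itself: on Linux, errno 111 is ECONNREFUSED, the value connect() returns when the TCP connection attempt is answered with a reset, most commonly because nothing is listening on the target address and port (a firewall that actively rejects the connection produces the same result). The short program below is only an illustration of that failure mode, not part of this test and not SPDK code; the address 10.0.0.2 and port 4420 are copied from the log entries above, and running it against any reachable host with no listener on the chosen port prints the same errno that posix_sock_create() reports here.

/* Minimal illustration (not SPDK code): connect() to a TCP port with no
 * listener fails with errno 111 (ECONNREFUSED), matching the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host but no listener on port 4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

In this run, the repeated ECONNREFUSED therefore suggests that the NVMe/TCP target's listener on 10.0.0.2:4420 was not up (or had gone away) while the host kept trying to reconnect the qpair.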
00:30:37.800 [2024-07-16 00:56:55.372955] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:37.800 [2024-07-16 00:56:55.373168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.800 [2024-07-16 00:56:55.373188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:37.800 qpair failed and we were unable to recover it.
00:30:37.800-00:30:37.803 [the same three-line sequence repeats unchanged for every further connect attempt on tqpair=0x7efd44000b90 (addr=10.0.0.2, port=4420) from 00:56:55.373461 through 00:56:55.391852]
00:30:37.803 [2024-07-16 00:56:55.392043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.392066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.392345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.392365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.392551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.392571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.392843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.392863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.393055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.393074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.393197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.393217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.393468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.393488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.393665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.393683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.393869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.393888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.394003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.394023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 
00:30:37.803 [2024-07-16 00:56:55.394273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.394293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.394414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.394432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.394610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.394631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.394765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.394785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.395051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.395071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.395245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.395272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.395381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.395401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.395645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.395665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.395843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.395863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.396062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.396082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 
00:30:37.803 [2024-07-16 00:56:55.396304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.396324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.396439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.396459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.396590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.396609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.396879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.396899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.397119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.397138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.397332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.397351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.397527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.397547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.397763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.397783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.397966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.397986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.398110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.398129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 
00:30:37.803 [2024-07-16 00:56:55.398262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.398283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.398406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.398425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.398603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.398623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.398909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.398928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.399123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.399142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.399364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.399384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.399569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.399589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.803 [2024-07-16 00:56:55.399796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.803 [2024-07-16 00:56:55.399816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.803 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.399944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.399963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.400178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.400198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 
00:30:37.804 [2024-07-16 00:56:55.400399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.400422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.400557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.400577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.400762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.400781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.400960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.400979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.401171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.401191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.401316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.401336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.401584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.401603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.401784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.401803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.401982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.402001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.402126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.402146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 
00:30:37.804 [2024-07-16 00:56:55.402333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.402353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.402547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.402566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.402683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.402702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.402889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.402909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.403019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.403038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.403316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.403337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.403459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.403478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.403607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.403627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.403830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.403850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.404116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.404135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 
00:30:37.804 [2024-07-16 00:56:55.404342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.404361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.404559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.404578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.404694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.404714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.404967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.404986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.405229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.405248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.405453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.405472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.405596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.405615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.405797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.405823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.406006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.406026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.406275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.406296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 
00:30:37.804 [2024-07-16 00:56:55.406404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.406423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.406693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.406712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.406926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.406945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.407189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.407208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.407399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.407419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.407595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.407615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.407808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.407827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.408002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.408022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.804 qpair failed and we were unable to recover it. 00:30:37.804 [2024-07-16 00:56:55.408215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.804 [2024-07-16 00:56:55.408235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.408414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.408434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 
00:30:37.805 [2024-07-16 00:56:55.408551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.408571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.408696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.408715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.408894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.408914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.409017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.409038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.409288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.409308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.409517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.409537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.409710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.409728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.409900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.409919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.410035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.410055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.410236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.410261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 
00:30:37.805 [2024-07-16 00:56:55.410465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.410484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.410659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.410679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.410804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.410823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.410950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.410969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.411074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.411095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.411299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.411319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.411425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.411444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.411562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.411581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.411798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.411817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.411995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.412014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 
00:30:37.805 [2024-07-16 00:56:55.412146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.412166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.412279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.412299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.412542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.412562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.412667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.412686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.412881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.412902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.413143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.413162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.413366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.413386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.413571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.413593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.413766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.413787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.413965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.413985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 
00:30:37.805 [2024-07-16 00:56:55.414099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.414118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.414321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.414342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.414531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.414550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.805 [2024-07-16 00:56:55.414722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.805 [2024-07-16 00:56:55.414741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.805 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.414858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.414878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.415021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.415040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.415229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.415247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.415504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.415525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.415701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.415720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.415879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.415898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 
00:30:37.806 [2024-07-16 00:56:55.415996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.416014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.416139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.416160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.416344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.416365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.416567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.416586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.416774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.416793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.416916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.416936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.417122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.417142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.417325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.417344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.417470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.417491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.417695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.417715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 
00:30:37.806 [2024-07-16 00:56:55.417905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.417926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.418100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.418119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.418223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.418243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.418456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.418476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.418602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.418621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.418741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.418761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.418935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.418955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.419148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.419166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.419275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.419295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.419396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.419415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 
00:30:37.806 [2024-07-16 00:56:55.419596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.419616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.419722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.419742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.419985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.420004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.420124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.420143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.420343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.420364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.420539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.420558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.420736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.420755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.420940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.420962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.421241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.421279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.421529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.421549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 
00:30:37.806 [2024-07-16 00:56:55.421726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.421745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.421881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.421900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.422146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.422165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.422306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.806 [2024-07-16 00:56:55.422326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.806 qpair failed and we were unable to recover it. 00:30:37.806 [2024-07-16 00:56:55.422477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.422496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.422739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.422758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.422931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.422950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.423152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.423172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.423305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.423326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.423567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.423586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 
00:30:37.807 [2024-07-16 00:56:55.423710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.423730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.423936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.423956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.424146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.424165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.424341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.424361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.424605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.424625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.424730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.424750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.424927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.424947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.425208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.425227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.425442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.425462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.425639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.425659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 
00:30:37.807 [2024-07-16 00:56:55.425748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.425767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.425990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.426009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.426197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.426216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.426335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.426355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.426605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.426624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.426732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.426751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.426945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.426965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.427154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.427173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.427470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.427489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.427592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.427610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 
00:30:37.807 [2024-07-16 00:56:55.427809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.427828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.427959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.427978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.428249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.428277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.428575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.428594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.428786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.428805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.428977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.428996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.429180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.429199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.429320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.429344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.429540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.429559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.429831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.429850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 
00:30:37.807 [2024-07-16 00:56:55.430121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.430141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.430262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.430282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.430531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.430550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.430808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.430827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.807 [2024-07-16 00:56:55.431006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.807 [2024-07-16 00:56:55.431025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.807 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.431201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.431221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.431409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.431429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.431695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.431714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.431895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.431914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.432119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.432138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 
00:30:37.808 [2024-07-16 00:56:55.432381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.432401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.432525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.432545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.432788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.432807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.433103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.433122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.433262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.433282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.433527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.433546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.433721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.433740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.433950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.433969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.434080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.434098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.434289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.434309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 
00:30:37.808 [2024-07-16 00:56:55.434491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.434511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.434716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.434734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.434907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.434927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.435143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.435162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.435309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.435329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.435571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.435590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.435849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.435867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.436193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.436212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.436479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.436498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.436694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.436714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 
00:30:37.808 [2024-07-16 00:56:55.436892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.436912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.437015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.437034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.437209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.437228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.437481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.437501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.437778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.437796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.437981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.438000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.438195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.438215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.438436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.438460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.438591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.438609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.438801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.438820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 
00:30:37.808 [2024-07-16 00:56:55.438930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.438949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.439121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.439140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.439273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.439293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.439557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.439576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.439784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.808 [2024-07-16 00:56:55.439804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.808 qpair failed and we were unable to recover it. 00:30:37.808 [2024-07-16 00:56:55.439987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.440007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.440113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.440131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.440314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.440335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.440586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.440605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.440781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.440801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 
00:30:37.809 [2024-07-16 00:56:55.441078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.441098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.441288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.441307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.441499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.441519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.441637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.441656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.441850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.441870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.442081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.442103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.442300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.442321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.442590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.442611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.442789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.442809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.443029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.443049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 
00:30:37.809 [2024-07-16 00:56:55.443265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.443286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.443491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.443511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.443690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.443710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.443946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.443967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.444094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.444114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.444317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.444338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.444614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.444634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.444853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.444873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.444981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.445001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.445298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.445318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 
00:30:37.809 [2024-07-16 00:56:55.445566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.445586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.445764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.445783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.445975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.445994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.446176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.446195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.446305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.446324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.446427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.446447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.446717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.446736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.446857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.446883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.447084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.447104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.809 [2024-07-16 00:56:55.447234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.447260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 
00:30:37.809 [2024-07-16 00:56:55.447384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.809 [2024-07-16 00:56:55.447404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.809 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.447687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.447707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.447927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.447946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.448188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.448208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.448327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.448347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.448602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.448621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.448794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.448814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.448990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.449010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.449283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.449304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.449423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.449443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 
00:30:37.810 [2024-07-16 00:56:55.449684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.449705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.449985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.450005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.450246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.450274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.450550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.450569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.450832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.450853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.451114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.451134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.451320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.451340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.451458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.451478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.451753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.451773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.452068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.452088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 
00:30:37.810 [2024-07-16 00:56:55.452280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.452301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.452544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.452563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.452768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.452787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.453087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.453109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.453294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.453315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.453521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.453540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.453788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.453808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.454002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.454022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.454215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.454234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.454519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.454540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 
00:30:37.810 [2024-07-16 00:56:55.454727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.454747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.455038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.455057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.455251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.455277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.455416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.455435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.455623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.455642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.455845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.455863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.456064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.456082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.456215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.456240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.456380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.456400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 00:30:37.810 [2024-07-16 00:56:55.456644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.810 [2024-07-16 00:56:55.456664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.810 qpair failed and we were unable to recover it. 
00:30:37.816 [2024-07-16 00:56:55.506460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.506480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.506696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.506716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.506894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.506913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.507160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.507179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.507390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.507410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.507533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.507552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.507728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.507747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.507908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.507928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.508203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.508223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.508527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.508547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 
00:30:37.816 [2024-07-16 00:56:55.508739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.508759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.509045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.509065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.509313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.509333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.509616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.509636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.509825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.509845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.510039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.510058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.510364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.510385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.510654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.510677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.510857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.510877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.511066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.511086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 
00:30:37.816 [2024-07-16 00:56:55.511214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.511233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.511513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.511534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.511813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.511832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.512032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.512051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.512333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.512354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.512631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.512650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.512849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.512868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.513145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.513164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.513279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.513299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.513507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.513528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 
00:30:37.816 [2024-07-16 00:56:55.513743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.513762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.513965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.513985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.514163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.514183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.514430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.514450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.514654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.514673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.514889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.514908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.515037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.515057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.515180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.515200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.816 qpair failed and we were unable to recover it. 00:30:37.816 [2024-07-16 00:56:55.515481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.816 [2024-07-16 00:56:55.515501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.515701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.515721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 
00:30:37.817 [2024-07-16 00:56:55.515846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.515866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.516190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.516210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.516463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.516484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.516679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.516700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.516971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.516990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.517328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.517349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.517629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.517648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.517854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.517875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.518142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.518161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.518379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.518399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 
00:30:37.817 [2024-07-16 00:56:55.518665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.518685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.518868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.518888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.519136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.519157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.519172] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.817 [2024-07-16 00:56:55.519238] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:37.817 [2024-07-16 00:56:55.519270] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:37.817 [2024-07-16 00:56:55.519290] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:37.817 [2024-07-16 00:56:55.519306] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:37.817 [2024-07-16 00:56:55.519437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.519458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.519451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:37.817 [2024-07-16 00:56:55.519565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:37.817 [2024-07-16 00:56:55.519721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.519739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.519678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:37.817 [2024-07-16 00:56:55.519683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:37.817 [2024-07-16 00:56:55.519938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.519957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.520165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.520184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 
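The app_setup_trace notices interleaved above describe how trace data for this run can be captured while the target is up. A minimal sketch of those options, using only the command and path quoted in the notices (the /tmp destination in the last line is an illustrative assumption, not taken from the log):

  # Snapshot the running nvmf app's tracepoints (shm instance 0), as the notice suggests:
  spdk_trace -s nvmf -i 0
  # If this is the only SPDK application currently running, the bare command also works:
  spdk_trace
  # Or keep the shared-memory trace file for offline analysis/debug
  # (destination path below is hypothetical, chosen only for the example):
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0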
00:30:37.817 [2024-07-16 00:56:55.520397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.520417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.520710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.520729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.520996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.521016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.521294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.521314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.521498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.521519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.521712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.521731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.521982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.522002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.522300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.522321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.522525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.522543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.522743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.522762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 
00:30:37.817 [2024-07-16 00:56:55.522976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.522995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.523245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.523273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.523458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.523478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.523810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.523830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.524014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.524034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.524304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.524325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.524607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.524627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.524809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.524832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.817 [2024-07-16 00:56:55.525047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.817 [2024-07-16 00:56:55.525067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.817 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.525280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.525301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 
00:30:37.818 [2024-07-16 00:56:55.525608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.525628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.525849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.525868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.526148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.526167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.526449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.526470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.526654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.526674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.526939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.526959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.527166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.527186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.527464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.527484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.527738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.527757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.527941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.527961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 
00:30:37.818 [2024-07-16 00:56:55.528160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.528181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.528462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.528483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.528734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.528755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.529005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.529024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.529207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.529227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.529493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.529530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.529713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.529732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.530014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.530034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.530167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.530186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.530393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.530414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 
00:30:37.818 [2024-07-16 00:56:55.530599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.530619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.530894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.530914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.531032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.531052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.531261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.531282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.531582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.531602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.531747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.531767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.532046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.532066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.532347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.532368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.532626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.532645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.532845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.532865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 
00:30:37.818 [2024-07-16 00:56:55.533149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.533169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.818 [2024-07-16 00:56:55.533367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.818 [2024-07-16 00:56:55.533388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.818 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.533664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.533684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.533965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.533985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.534226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.534246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.534413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.534433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.534590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.534611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.534890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.534915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.535098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.535118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.535339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.535359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 
00:30:37.819 [2024-07-16 00:56:55.535611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.535631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.535820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.535841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.536065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.536084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.536273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.536294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.536497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.536517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.536768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.536788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.537039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.537059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.537241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.537268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.537521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.537541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.537748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.537768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 
00:30:37.819 [2024-07-16 00:56:55.538051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.538071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.538278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.538299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.538495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.538515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.538664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.538684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.538988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.539009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.539263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.539284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.539468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.539488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.539792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.539812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.540003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.540024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.540285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.540306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 
00:30:37.819 [2024-07-16 00:56:55.540530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.540550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.540822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.540842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.541067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.541088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.541347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.541368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.541656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.541678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.541876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.541896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.542170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.542190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.542442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.542464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.542767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.542787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.543062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.543084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 
00:30:37.819 [2024-07-16 00:56:55.543380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.543401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.543618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.543640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.819 qpair failed and we were unable to recover it. 00:30:37.819 [2024-07-16 00:56:55.543916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.819 [2024-07-16 00:56:55.543936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.544211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.544232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.544426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.544446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.544679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.544700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.544889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.544910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.545239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.545276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.545499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.545520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.545796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.545817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 
00:30:37.820 [2024-07-16 00:56:55.546142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.546162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.546416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.546438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.546692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.546711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.546986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.547007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.547290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.547311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.547566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.547586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.547863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.547884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.548113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.548134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.548367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.548387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.548694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.548715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 
00:30:37.820 [2024-07-16 00:56:55.548974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.548995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.549132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.549153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.549357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.549377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.549657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.549678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.549934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.549955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.550198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.550219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.550362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.550382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.550619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.550639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.550919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.550940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.551227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.551250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 
00:30:37.820 [2024-07-16 00:56:55.551474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.551495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.551756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.551777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.552035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.552055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.552265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.552286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.552498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.552519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.552742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.552762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.552970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.552992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.553278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.553300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.553496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.553517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.553792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.553813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 
00:30:37.820 [2024-07-16 00:56:55.553954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.553974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.554274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.554296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.554500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.820 [2024-07-16 00:56:55.554520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.820 qpair failed and we were unable to recover it. 00:30:37.820 [2024-07-16 00:56:55.554743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.554763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.554968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.554988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.555290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.555311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.555512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.555532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.555807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.555832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.556083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.556103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.556387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.556409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 
00:30:37.821 [2024-07-16 00:56:55.556625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.556645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.556907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.556927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.557198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.557218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.557442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.557464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.557748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.557768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.558027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.558048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.558230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.558250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.558460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.558480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.558734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.558755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.559019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.559040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 
00:30:37.821 [2024-07-16 00:56:55.559335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.559356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.559670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.559692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.559999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.560020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.560242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.560270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.560524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.560545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.560748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.560769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.560981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.561001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.561305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.561327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.561628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.561649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.561827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.561847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 
00:30:37.821 [2024-07-16 00:56:55.562100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.562120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.562325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.562346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.562531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.562552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.562756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.562777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.563061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.563082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.563342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.563363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.563640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.563661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.563845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.563865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.564062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.564083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.564366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.564386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 
00:30:37.821 [2024-07-16 00:56:55.564667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.564704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.565017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.565053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.565395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.565429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.821 [2024-07-16 00:56:55.565674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.821 [2024-07-16 00:56:55.565706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.821 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.565930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.565955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.566165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.566186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.566466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.566487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.566762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.566788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.567108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.567128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.567324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.567344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 
00:30:37.822 [2024-07-16 00:56:55.567569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.567588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.567865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.567885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.568089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.568109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.568312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.568332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.568645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.568667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.568976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.568997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.569181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.569202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.569466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.569486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.569693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.569712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.569906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.569926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 
00:30:37.822 [2024-07-16 00:56:55.570215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.570235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.570512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.570532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.570730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.570750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.570994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.571014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.571293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.571313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.571499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.571519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.571769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.571789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.572043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.572063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.572344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.572365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.572613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.572633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 
00:30:37.822 [2024-07-16 00:56:55.572761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.572781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.572981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.573001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.573282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.573302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.573579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.573599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.573879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.573900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.574023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.574043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.574267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.574289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.574580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.574600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.574800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.822 [2024-07-16 00:56:55.574820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.822 qpair failed and we were unable to recover it. 00:30:37.822 [2024-07-16 00:56:55.574946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.574966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 
00:30:37.823 [2024-07-16 00:56:55.575241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.575269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.575553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.575572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.575751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.575771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.575999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.576018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.576308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.576329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.576525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.576545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.576755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.576774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.577052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.577076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.577324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.577344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.577533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.577553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 
00:30:37.823 [2024-07-16 00:56:55.577733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.577752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.578032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.578053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.578304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.578324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.578652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.578672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.578968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.578988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.579100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.579120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.579400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.579420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.579673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.579692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.579939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.579959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.580218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.580237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 
00:30:37.823 [2024-07-16 00:56:55.580445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.580465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.580720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.580740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.580938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.580958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.581154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.581174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.581401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.581422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.581737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.581756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.581980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.581999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.582277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.582297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.582578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.582598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 00:30:37.823 [2024-07-16 00:56:55.582789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.823 [2024-07-16 00:56:55.582809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:37.823 qpair failed and we were unable to recover it. 
00:30:37.823 [2024-07-16 00:56:55.583062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:37.823 [2024-07-16 00:56:55.583081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:37.823 qpair failed and we were unable to recover it.
[... the same three-line error repeats roughly 200 more times between 00:56:55.583363 and 00:56:55.633666, always from posix.c:1023:posix_sock_create (connect() failed, errno = 111) and nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, always for tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:38.116 [2024-07-16 00:56:55.633826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.116 [2024-07-16 00:56:55.633845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:38.116 qpair failed and we were unable to recover it.
00:30:38.116 [2024-07-16 00:56:55.634033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.116 [2024-07-16 00:56:55.634054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.116 qpair failed and we were unable to recover it. 00:30:38.116 [2024-07-16 00:56:55.634336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.116 [2024-07-16 00:56:55.634355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.116 qpair failed and we were unable to recover it. 00:30:38.116 [2024-07-16 00:56:55.634653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.116 [2024-07-16 00:56:55.634673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.116 qpair failed and we were unable to recover it. 00:30:38.116 [2024-07-16 00:56:55.634878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.116 [2024-07-16 00:56:55.634898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.116 qpair failed and we were unable to recover it. 00:30:38.116 [2024-07-16 00:56:55.635016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.116 [2024-07-16 00:56:55.635035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.116 qpair failed and we were unable to recover it. 00:30:38.116 [2024-07-16 00:56:55.635308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.116 [2024-07-16 00:56:55.635328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.116 qpair failed and we were unable to recover it. 00:30:38.116 [2024-07-16 00:56:55.635503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.116 [2024-07-16 00:56:55.635522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.116 qpair failed and we were unable to recover it. 00:30:38.116 [2024-07-16 00:56:55.635735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.116 [2024-07-16 00:56:55.635754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.116 qpair failed and we were unable to recover it. 00:30:38.116 [2024-07-16 00:56:55.636027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.116 [2024-07-16 00:56:55.636047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.116 qpair failed and we were unable to recover it. 00:30:38.116 [2024-07-16 00:56:55.636310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.116 [2024-07-16 00:56:55.636330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.116 qpair failed and we were unable to recover it. 
00:30:38.116 [2024-07-16 00:56:55.636611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.116 [2024-07-16 00:56:55.636630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.116 qpair failed and we were unable to recover it. 00:30:38.116 [2024-07-16 00:56:55.636832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.116 [2024-07-16 00:56:55.636851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.116 qpair failed and we were unable to recover it. 00:30:38.116 [2024-07-16 00:56:55.637061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.116 [2024-07-16 00:56:55.637080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.116 qpair failed and we were unable to recover it. 00:30:38.116 [2024-07-16 00:56:55.637269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.637288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.637464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.637484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.637728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.637747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.637954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.637973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.638152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.638171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.638347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.638366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.638572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.638591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 
00:30:38.117 [2024-07-16 00:56:55.638864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.638884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.639064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.639084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.639329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.639349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.639592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.639611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.639796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.639815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.640005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.640025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.640209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.640228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.640412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.640431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.640732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.640752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.641049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.641068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 
00:30:38.117 [2024-07-16 00:56:55.641339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.641359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.641607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.641626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.641805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.641824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.642083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.642102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.642282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.642305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.642510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.642529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.642712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.642731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.642922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.642941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.643199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.643218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.643511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.643531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 
00:30:38.117 [2024-07-16 00:56:55.643806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.643826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.644043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.644062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.644252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.644276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.644582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.644601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.644849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.644869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.645071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.645091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.645267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.645287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.645471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.645490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.117 [2024-07-16 00:56:55.645691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.117 [2024-07-16 00:56:55.645710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.117 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.645846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.645865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 
00:30:38.118 [2024-07-16 00:56:55.646131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.646151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.646349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.646369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.646562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.646581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.646772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.646791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.647062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.647083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.647274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.647293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.647414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.647433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.647714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.647734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.648000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.648019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.648279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.648300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 
00:30:38.118 [2024-07-16 00:56:55.648501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.648520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.648826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.648845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.649077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.649096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.649367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.649386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.649525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.649545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.649816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.649835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.650053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.650072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.650333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.650353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.650544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.650564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.650834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.650853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 
00:30:38.118 [2024-07-16 00:56:55.651055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.651074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.651265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.651284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.651555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.651574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.651785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.651805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.652050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.652072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.652197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.652216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.652539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.652559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.652735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.652755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.653001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.653020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.653222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.653241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 
00:30:38.118 [2024-07-16 00:56:55.653537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.653557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.653732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.653751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.653866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.653885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.654101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.654119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.654420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.654440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.654648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.654666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.654890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.654909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.655165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.655184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.655383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.118 [2024-07-16 00:56:55.655403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.118 qpair failed and we were unable to recover it. 00:30:38.118 [2024-07-16 00:56:55.655661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.655680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 
00:30:38.119 [2024-07-16 00:56:55.655895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.655914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.656159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.656179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.656457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.656477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.656720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.656738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.656915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.656934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.657250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.657276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.657523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.657543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.657811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.657830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.658024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.658043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.658318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.658338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 
00:30:38.119 [2024-07-16 00:56:55.658515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.658534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.658810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.658830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.659044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.659063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.659322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.659342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.659534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.659553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.659682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.659702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.659878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.659896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.660089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.660108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.660213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.660233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.660525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.660544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 
00:30:38.119 [2024-07-16 00:56:55.660788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.660808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.661081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.661100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.661397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.661417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.661739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.661758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.661867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.661889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.662165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.662184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.662452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.662472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.119 [2024-07-16 00:56:55.662742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.119 [2024-07-16 00:56:55.662761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.119 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.662882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.662900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.663143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.663163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 
00:30:38.120 [2024-07-16 00:56:55.663415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.663435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.663690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.663709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.663954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.663973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.664100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.664120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.664392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.664412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.664587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.664606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.664821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.664840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.664968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.664988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.665197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.665216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.665486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.665505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 
00:30:38.120 [2024-07-16 00:56:55.665639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.665658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.665933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.665953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.666146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.666165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.666363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.666383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.666587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.666606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.666738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.666756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.667074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.667093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.667343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.667363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.667540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.667559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.667695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.667714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 
00:30:38.120 [2024-07-16 00:56:55.667922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.667941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.668132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.668152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.668419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.668439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.668575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.668595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.668728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.668746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.668992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.669011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.669123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.669142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.669333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.669354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.669548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.669567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.669846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.669866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 
00:30:38.120 [2024-07-16 00:56:55.670088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.670107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.670365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.670385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.670564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.670584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.670775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.670794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.671082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.671105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.671311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.671331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.671598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.671618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.671888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.671907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.120 [2024-07-16 00:56:55.672085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.120 [2024-07-16 00:56:55.672105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.120 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.672383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.672402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 
00:30:38.121 [2024-07-16 00:56:55.672581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.672600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.672817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.672837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.673030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.673048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.673329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.673349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.673534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.673552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.673748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.673767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.673959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.673978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.674183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.674202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.674328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.674348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.674522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.674541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 
00:30:38.121 [2024-07-16 00:56:55.674720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.674739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.674987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.675006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.675329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.675348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.675642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.675661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.675930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.675950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.676225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.676245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.676508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.676528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.676705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.676724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.676903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.676923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.677095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.677114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 
00:30:38.121 [2024-07-16 00:56:55.677297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.677316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.677537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.677556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.677678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.677697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.677957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.677976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.678275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.678294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.678563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.678583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.678874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.678893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.679246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.679270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.679468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.679491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.679690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.679709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 
00:30:38.121 [2024-07-16 00:56:55.679954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.679973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.680244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.680268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.680513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.680532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.680792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.680811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.681059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.681081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.681212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.681231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.681509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.681529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.681649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.681668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.681846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.121 [2024-07-16 00:56:55.681864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.121 qpair failed and we were unable to recover it. 00:30:38.121 [2024-07-16 00:56:55.682079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.682097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 
00:30:38.122 [2024-07-16 00:56:55.682302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.682323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.682616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.682636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.682810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.682829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.683103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.683122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.683418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.683438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.683696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.683715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.683966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.683985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.684136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.684155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.684352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.684371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.684625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.684644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 
00:30:38.122 [2024-07-16 00:56:55.684887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.684905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.685160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.685179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.685359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.685378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.685592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.685611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.685727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.685746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.685975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.685994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.686223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.686242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.686531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.686550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.686724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.686743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.687037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.687056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 
00:30:38.122 [2024-07-16 00:56:55.687229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.687248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.687431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.687452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.687644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.687662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.687787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.687807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.688077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.688096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.688272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.688292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.688590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.688609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.688783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.688802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.689100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.689119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.689422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.689441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 
00:30:38.122 [2024-07-16 00:56:55.689563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.689582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.689872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.689892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.690213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.690232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.690427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.690447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.690692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.690715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.122 [2024-07-16 00:56:55.690892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.122 [2024-07-16 00:56:55.690911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.122 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.691178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.691197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.691305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.691324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.691591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.691610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.691881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.691900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 
00:30:38.123 [2024-07-16 00:56:55.692166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.692185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.692457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.692478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.692653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.692673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.692916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.692936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.693213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.693232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.693512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.693532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.693727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.693747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.693931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.693950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.694133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.694152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.694270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.694289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 
00:30:38.123 [2024-07-16 00:56:55.694514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.694533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.694669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.694688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.694960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.694979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.695189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.695209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.695476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.695495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.695737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.695756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.696007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.696026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.696160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.696179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.696458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.696478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.696749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.696768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 
00:30:38.123 [2024-07-16 00:56:55.697029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.697048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.697295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.697315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.697611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.697631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.697841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.697860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.698082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.698101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.698357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.698377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.698555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.698574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.698824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.698844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.699018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.699037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.123 [2024-07-16 00:56:55.699164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.699183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 
00:30:38.123 [2024-07-16 00:56:55.699394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.123 [2024-07-16 00:56:55.699414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.123 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.699618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.699638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.699781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.699800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.700000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.700019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.700193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.700216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.700362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.700382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.700652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.700671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.700859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.700879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.701157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.701176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.701281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.701300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 
00:30:38.124 [2024-07-16 00:56:55.701595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.701615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.701889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.701908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.702130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.702148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.702406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.702425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.702626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.702645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.702921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.702940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.703184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.703204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.703314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.703333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.703610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.703630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.703850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.703868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 
00:30:38.124 [2024-07-16 00:56:55.704136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.704154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.704428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.704448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.704657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.704676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.704853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.704873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.705064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.705083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.705348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.705369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.705611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.705630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.705825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.705844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.706151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.706169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 00:30:38.124 [2024-07-16 00:56:55.706423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.124 [2024-07-16 00:56:55.706444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.124 qpair failed and we were unable to recover it. 
00:30:38.124 [2024-07-16 00:56:55.706698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.124 [2024-07-16 00:56:55.706718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:38.124 qpair failed and we were unable to recover it.
00:30:38.124 [2024-07-16 00:56:55.706973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.124 [2024-07-16 00:56:55.706993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:38.124 qpair failed and we were unable to recover it.
[... 193 further identical connect() failed, errno = 111 / sock connection error / qpair failed sequences, connect timestamps 00:56:55.707263 through 00:56:55.754216, all for tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 ...]
00:30:38.129 [2024-07-16 00:56:55.754516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.129 [2024-07-16 00:56:55.754595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420
00:30:38.129 qpair failed and we were unable to recover it.
[... 11 further identical failures for tqpair=0xeaafd0 with addr=10.0.0.2, port=4420, connect timestamps 00:56:55.754872 through 00:56:55.757867 ...]
00:30:38.130 [2024-07-16 00:56:55.758099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.130 [2024-07-16 00:56:55.758121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:38.130 qpair failed and we were unable to recover it.
00:30:38.130 [2024-07-16 00:56:55.758423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.130 [2024-07-16 00:56:55.758444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:38.130 qpair failed and we were unable to recover it.
00:30:38.130 [2024-07-16 00:56:55.758570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.130 [2024-07-16 00:56:55.758589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:38.130 qpair failed and we were unable to recover it.
00:30:38.130 [2024-07-16 00:56:55.758868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.758887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.759076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.759096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.759290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.759310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.759429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.759448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.759662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.759680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.759983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.760002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.760267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.760287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.760499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.760519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.760717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.760736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.761035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.761054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 
00:30:38.130 [2024-07-16 00:56:55.761325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.761345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.761604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.761623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.761897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.761917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.762210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.762229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.762360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.762380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.762582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.762600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.762831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.762850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.762998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.763017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.763205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.763228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.763470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.763491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 
00:30:38.130 [2024-07-16 00:56:55.763679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.763698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.763957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.763976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.764223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.764243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.764448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.764468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.764665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.764685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.764898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.764917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.765202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.765222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.765490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.765509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.765787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.765806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.766086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.766105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 
00:30:38.130 [2024-07-16 00:56:55.766430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.766450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.766642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.766662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.766919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.766938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.767116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.767135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.767380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.767400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.130 [2024-07-16 00:56:55.767644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.130 [2024-07-16 00:56:55.767664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.130 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.767859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.767879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.768057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.768077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.768266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.768286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.768489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.768510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 
00:30:38.131 [2024-07-16 00:56:55.768649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.768668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.768796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.768816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.768989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.769009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.769141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.769161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.769412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.769431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.769651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.769670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.769886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.769905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.770104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.770123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.770236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.770271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.770447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.770466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 
00:30:38.131 [2024-07-16 00:56:55.770663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.770682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.770888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.770908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.771096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.771116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.771302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.771322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.771500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.771520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.771660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.771680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.771874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.771893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.772132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.772151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.772320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.772344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.772480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.772499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 
00:30:38.131 [2024-07-16 00:56:55.772708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.772727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.773000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.773020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.773194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.773212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.773470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.773490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.773791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.773810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.774002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.774024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.774288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.774308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.774503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.774523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.774767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.774788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.775078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.775099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 
00:30:38.131 [2024-07-16 00:56:55.775345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.775366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.775501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.775520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.775736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.775756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.775945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.775964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.776242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.776268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.776442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.131 [2024-07-16 00:56:55.776463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.131 qpair failed and we were unable to recover it. 00:30:38.131 [2024-07-16 00:56:55.776589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.776608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.776801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.776819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.777139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.777159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.777409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.777429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 
00:30:38.132 [2024-07-16 00:56:55.777702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.777721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.777947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.777967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.778156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.778176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.778408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.778428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.778697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.778717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.779028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.779048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.779247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.779276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.779460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.779481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.779617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.779636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.779901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.779920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 
00:30:38.132 [2024-07-16 00:56:55.780109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.780128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.780324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.780344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.780559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.780578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.780725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.780745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.780939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.780958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.781161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.781181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.781407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.781428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.781620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.781639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.781986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.782008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.782281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.782301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 
00:30:38.132 [2024-07-16 00:56:55.782488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.782507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.782645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.782665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.782853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.782873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.783145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.783165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.783358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.783377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.783576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.783595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.783740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.783765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.783971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.783991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.784194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.784214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.784450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.784470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 
00:30:38.132 [2024-07-16 00:56:55.784666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.784685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.784912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.132 [2024-07-16 00:56:55.784931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.132 qpair failed and we were unable to recover it. 00:30:38.132 [2024-07-16 00:56:55.785223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.785243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.785440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.785460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.785669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.785688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.785983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.786002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.786206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.786225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.786533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.786553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.786677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.786696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.786884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.786904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 
00:30:38.133 [2024-07-16 00:56:55.787109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.787129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.787426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.787446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.787623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.787643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.787819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.787837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.788085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.788104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.788410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.788430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.788698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.788717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.788940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.788959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.789152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.789171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.789439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.789459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 
00:30:38.133 [2024-07-16 00:56:55.789666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.789684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.790005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.790024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.790323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.790343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.790565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.790584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.790829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.790848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.790984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.791003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.791196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.791215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.791401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.791422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.791637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.791660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.791859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.791878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 
00:30:38.133 [2024-07-16 00:56:55.792081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.792101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.792296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.792316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.792573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.792592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.792731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.792750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.793048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.793068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.793266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.793285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.793560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.793579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.793773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.793792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.794080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.794098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.794370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.794389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 
00:30:38.133 [2024-07-16 00:56:55.794667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.794685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.794885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.794905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.133 [2024-07-16 00:56:55.795183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.133 [2024-07-16 00:56:55.795202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.133 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.795329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.795348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.795590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.795609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.795852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.795871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.796098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.796117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.796382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.796402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.796602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.796621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.796812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.796830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 
00:30:38.134 [2024-07-16 00:56:55.797018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.797037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.797336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.797356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.797533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.797551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.797792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.797812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.798027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.798046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.798227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.798247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.798523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.798543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.798710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.798729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.798967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.798986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.799159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.799178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 
00:30:38.134 [2024-07-16 00:56:55.799354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.799373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.799626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.799646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.799789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.799808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.800100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.800119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.800330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.800350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.800529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.800548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.800838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.800857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.801066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.801086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.801199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.801222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.801493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.801512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 
00:30:38.134 [2024-07-16 00:56:55.801716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.801735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.802036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.802056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.802278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.802298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.802549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.802568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.802850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.802869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.803145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.803164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.803388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.803408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.803523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.803541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.803809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.803829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.804011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.804030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 
00:30:38.134 [2024-07-16 00:56:55.804300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.804320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.804501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.804520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.804720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.134 [2024-07-16 00:56:55.804739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.134 qpair failed and we were unable to recover it. 00:30:38.134 [2024-07-16 00:56:55.804948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.804967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.805162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.805181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.805456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.805476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.805659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.805678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.805818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.805837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.806035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.806055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.806229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.806249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 
00:30:38.135 [2024-07-16 00:56:55.806518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.806537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.806676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.806695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.806913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.806931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.807198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.807217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.807436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.807455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.807617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.807636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.807767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.807787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.808030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.808050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.808324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.808344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.808530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.808549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 
00:30:38.135 [2024-07-16 00:56:55.808792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.808811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.809026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.809044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.809286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.809306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.809528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.809547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.809769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.809789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.809919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.809938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.810184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.810204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.810403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.810422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.810690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.810713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.810842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.810862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 
00:30:38.135 [2024-07-16 00:56:55.811062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.811081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.811272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.811291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.811561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.811581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.811794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.811812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.811938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.811957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.812095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.812115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.812318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.812337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.812540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.812559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.812697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.812717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.812903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.812922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 
00:30:38.135 [2024-07-16 00:56:55.813114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.813134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.813391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.813411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.813607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.813626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.135 [2024-07-16 00:56:55.813838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.135 [2024-07-16 00:56:55.813857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.135 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.814090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.814110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.814298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.814319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.814458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.814477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.814601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.814619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.814796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.814815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.815009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.815028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 
00:30:38.136 [2024-07-16 00:56:55.815296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.815316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.815501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.815521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.815792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.815810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.816017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.816037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.816296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.816318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.816523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.816544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.816790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.816809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.817081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.817100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.817370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.817390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.817607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.817626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 
00:30:38.136 [2024-07-16 00:56:55.817889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.817909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.818175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.818194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.818411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.818431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.818626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.818646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.818954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.818973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.819167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.819186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.819412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.819431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.819566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.819585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.819849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.819872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.820012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.820030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 
00:30:38.136 [2024-07-16 00:56:55.820276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.820296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.820527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.820546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.820725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.820746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.820951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.820970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.821178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.821197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.821470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.821490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.821692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.821711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.821903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.821923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.822184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.136 [2024-07-16 00:56:55.822204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.136 qpair failed and we were unable to recover it. 00:30:38.136 [2024-07-16 00:56:55.822312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.822331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 
00:30:38.137 [2024-07-16 00:56:55.822611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.822631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.822761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.822780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.822905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.822925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.823112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.823131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.823400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.823420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.823625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.823644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.823767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.823787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.824003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.824022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.824291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.824311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.824574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.824594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 
00:30:38.137 [2024-07-16 00:56:55.824810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.824828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.825116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.825135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.825367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.825388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.825546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.825566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.825773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.825792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.826013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.826033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.826243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.826270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.826544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.826563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.826753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.826772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.827096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.827116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 
00:30:38.137 [2024-07-16 00:56:55.827388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.827408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.827591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.827610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.827816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.827836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.828062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.828081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.828302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.828322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.828518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.828537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.828741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.828759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.828975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.828994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.829186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.829209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.829406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.829426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 
00:30:38.137 [2024-07-16 00:56:55.829617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.829636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.829880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.829899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.830019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.830038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.830219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.830239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.830374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.830393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.830587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.830607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.830736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.830755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.830979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.830999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.831267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.831286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 00:30:38.137 [2024-07-16 00:56:55.831480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.137 [2024-07-16 00:56:55.831500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.137 qpair failed and we were unable to recover it. 
00:30:38.137 [2024-07-16 00:56:55.831674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.831693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.831868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.831887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.831997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.832016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.832285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.832305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.832514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.832533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.832652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.832671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.832802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.832820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.833014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.833034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.833208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.833228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.833461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.833482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 
00:30:38.138 [2024-07-16 00:56:55.833751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.833770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.833992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.834011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.834268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.834287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.834560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.834580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.834703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.834721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.834972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.835014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.835272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.835306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.835530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.835561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.835782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.835813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.836046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.836078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 
00:30:38.138 [2024-07-16 00:56:55.836350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.836382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.836649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.836681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.836842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.836873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.837065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.837088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.837344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.837364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.837558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.837577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.837712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.837731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.837986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.838005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.838261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.838281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.838545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.838565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 
00:30:38.138 [2024-07-16 00:56:55.838815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.838834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.839055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.839074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.839238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.839274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.839520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.839539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.839665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.839684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.839813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.839833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.840010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.840030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.840275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.840296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.840514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.840533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.138 [2024-07-16 00:56:55.840729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.840748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 
00:30:38.138 [2024-07-16 00:56:55.840993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.138 [2024-07-16 00:56:55.841012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.138 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.841166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.841185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.841439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.841459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.841680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.841699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.841902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.841920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.842230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.842250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.842477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.842495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.842674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.842693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.842922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.842942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.843187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.843206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 
00:30:38.139 [2024-07-16 00:56:55.843327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.843346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.843479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.843498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.843633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.843653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.843788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.843809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.844042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.844061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.844308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.844331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.844540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.844559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.844835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.844854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.845054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.845074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.845294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.845315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 
00:30:38.139 [2024-07-16 00:56:55.845526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.845546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.845755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.845774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.845964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.845983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.846108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.846127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.846433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.846453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.846649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.846668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.846849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.846868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.846972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.846992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.847237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.847263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.847386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.847405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 
00:30:38.139 [2024-07-16 00:56:55.847535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.847554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.847764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.847783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.848074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.848093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.848312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.848333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.848518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.848538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.848763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.848782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.849048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.849068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.849262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.849281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.849404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.849423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.849640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.849659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 
00:30:38.139 [2024-07-16 00:56:55.849854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.849874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.139 [2024-07-16 00:56:55.850146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.139 [2024-07-16 00:56:55.850166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.139 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.850447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.850467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.850664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.850683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.850862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.850881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.851145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.851164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.851335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.851354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.851534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.851553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.851832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.851852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.852120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.852140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 
00:30:38.140 [2024-07-16 00:56:55.852328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.852348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.852545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.852564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.852788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.852807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.853115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.853135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.853330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.853350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.853548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.853571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.853781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.853800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.854020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.854039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.854184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.854203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.854430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.854450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 
00:30:38.140 [2024-07-16 00:56:55.854638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.854658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.854801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.854821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.855134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.855153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.855330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.855349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.855538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.855557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.855744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.855763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.856027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.856047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.856346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.856366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.856502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.856521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.856720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.856740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 
00:30:38.140 [2024-07-16 00:56:55.856928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.856947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.857087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.857106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.857306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.857325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.857570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.857589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.857709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.857728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.858007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.858027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.858227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.858246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.140 [2024-07-16 00:56:55.858503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.140 [2024-07-16 00:56:55.858523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.140 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.858666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.858685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.858880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.858898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 
00:30:38.141 [2024-07-16 00:56:55.859092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.859111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.859366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.859386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.859639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.859658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.859848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.859866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.860082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.860101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.860305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.860325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.860456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.860475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.860596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.860615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.860761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.860780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.861002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.861021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 
00:30:38.141 [2024-07-16 00:56:55.861266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.861286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.861490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.861509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.861736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.861755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.861893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.861912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.862141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.862160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.862428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.862451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.862575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.862596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.862798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.862818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.863023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.863042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.863323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.863342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 
00:30:38.141 [2024-07-16 00:56:55.863616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.863635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.863768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.863787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.863984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.864003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.864193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.864212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.864470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.864491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.864679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.864699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.864890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.864909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.865116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.865135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.865315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.865334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.865513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.865532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 
00:30:38.141 [2024-07-16 00:56:55.865716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.865735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.865985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.866004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.866314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.866334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.866586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.866606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.866763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.866782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.866969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.866988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.867265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.867285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.867478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.141 [2024-07-16 00:56:55.867497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.141 qpair failed and we were unable to recover it. 00:30:38.141 [2024-07-16 00:56:55.867688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.867708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.867819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.867838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 
00:30:38.142 [2024-07-16 00:56:55.868009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.868029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.868264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.868284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.868477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.868497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.868684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.868702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.869012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.869031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.869266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.869284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.869519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.869538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.869723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.869743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.869960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.869979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.870183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.870203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 
00:30:38.142 [2024-07-16 00:56:55.870417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.870438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.870719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.870739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.870926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.870945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.871203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.871223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.871452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.871472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.871747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.871769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.871987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.872006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.872279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.872298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.872556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.872576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.872772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.872791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 
00:30:38.142 [2024-07-16 00:56:55.873008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.873026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.873224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.873244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.873532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.873552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.873797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.873816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.874043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.874063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.874315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.874335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.874535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.874553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.874702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.874721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.874916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.874935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.875208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.875227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 
00:30:38.142 [2024-07-16 00:56:55.875522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.875542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.875837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.875856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.876172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.876191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.876413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.876433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.876620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.876639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.876775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.876794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.877069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.877088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.877332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.877352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.142 [2024-07-16 00:56:55.877484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.142 [2024-07-16 00:56:55.877504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.142 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.877693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.877713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 
00:30:38.143 [2024-07-16 00:56:55.878007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.878027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.878207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.878226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.878530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.878550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.878755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.878774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.878966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.878985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.879176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.879196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.879317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.879336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.879524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.879543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.879802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.879821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.880122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.880141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 
00:30:38.143 [2024-07-16 00:56:55.880397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.880418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.880596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.880615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.880787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.880806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.881019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.881038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.881249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.881276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.881473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.881496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.881700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.881718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.881836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.881856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.882128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.882147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.882355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.882375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 
00:30:38.143 [2024-07-16 00:56:55.882619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.882638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.882852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.882872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.883110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.883130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.883450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.883470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.883602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.883620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.883810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.883829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.884009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.884028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.884280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.884300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.884493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.884512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.884693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.884712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 
00:30:38.143 [2024-07-16 00:56:55.885008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.885027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.885241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.885266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.885462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.885481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.885626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.885646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.885792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.885812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.886139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.886158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.886427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.886447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.886701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.886720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.886970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.886988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 00:30:38.143 [2024-07-16 00:56:55.887272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.143 [2024-07-16 00:56:55.887292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.143 qpair failed and we were unable to recover it. 
00:30:38.144 [2024-07-16 00:56:55.887511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.887531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.887774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.887793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.888073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.888143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.888466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.888503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.888727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.888759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.889044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.889075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.889311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.889342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.889619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.889651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.889936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.889968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.890264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.890296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd34000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 
00:30:38.144 [2024-07-16 00:56:55.890506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.890528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.890728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.890747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.891035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.891053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.891326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.891346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.891488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.891507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.891620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.891642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.891824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.891843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.892042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.892062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.892317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.892337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.892475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.892494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 
00:30:38.144 [2024-07-16 00:56:55.892788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.892808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.893011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.893030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.893276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.893296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.893597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.893616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.893918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.893938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.894182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.894201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.894374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.894394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.894589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.894608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.894743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.894762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.895085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.895105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 
00:30:38.144 [2024-07-16 00:56:55.895349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.895369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.895666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.895685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.895919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.895937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.896111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.896130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.896310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.896330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.896502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.896522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.896712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.896731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.896942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.896962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.897204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.897222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 00:30:38.144 [2024-07-16 00:56:55.897531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.897551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.144 qpair failed and we were unable to recover it. 
00:30:38.144 [2024-07-16 00:56:55.897795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.144 [2024-07-16 00:56:55.897814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.145 qpair failed and we were unable to recover it. 00:30:38.145 [2024-07-16 00:56:55.897944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.145 [2024-07-16 00:56:55.897964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.145 qpair failed and we were unable to recover it. 00:30:38.145 [2024-07-16 00:56:55.898266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.145 [2024-07-16 00:56:55.898286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.145 qpair failed and we were unable to recover it. 00:30:38.145 [2024-07-16 00:56:55.898543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.145 [2024-07-16 00:56:55.898562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.145 qpair failed and we were unable to recover it. 00:30:38.145 [2024-07-16 00:56:55.898834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.145 [2024-07-16 00:56:55.898853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.145 qpair failed and we were unable to recover it. 00:30:38.145 [2024-07-16 00:56:55.898976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.145 [2024-07-16 00:56:55.898996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.145 qpair failed and we were unable to recover it. 00:30:38.145 [2024-07-16 00:56:55.899135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.145 [2024-07-16 00:56:55.899155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.899352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.899372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.899564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.899584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.899773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.899793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 
00:30:38.146 [2024-07-16 00:56:55.900046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.900066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.900327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.900347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.900484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.900502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.900693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.900713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.900921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.900940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.901132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.901155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.901407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.901427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.901565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.901584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.901852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.901871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.902065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.902086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 
00:30:38.146 [2024-07-16 00:56:55.902313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.902333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.902440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.902459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.902725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.902744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.902998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.903017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.903141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.903160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.903373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.903393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.903641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.903660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.903880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.903899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.904208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.904228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.904425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.904445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 
00:30:38.146 [2024-07-16 00:56:55.904640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.904658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.904846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.904865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.905181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.905200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.905393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.905413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.905541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.905561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.905681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.905700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.905918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.905937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.906204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.906223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.906456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.906476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.906671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.906690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 
00:30:38.146 [2024-07-16 00:56:55.906980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.906999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.907199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.907219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.907479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.907499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.907638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.146 [2024-07-16 00:56:55.907657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.146 qpair failed and we were unable to recover it. 00:30:38.146 [2024-07-16 00:56:55.907768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.147 [2024-07-16 00:56:55.907787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.147 qpair failed and we were unable to recover it. 00:30:38.147 [2024-07-16 00:56:55.908123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.147 [2024-07-16 00:56:55.908142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.147 qpair failed and we were unable to recover it. 00:30:38.147 [2024-07-16 00:56:55.908415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.147 [2024-07-16 00:56:55.908434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.147 qpair failed and we were unable to recover it. 00:30:38.147 [2024-07-16 00:56:55.908623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.147 [2024-07-16 00:56:55.908642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.147 qpair failed and we were unable to recover it. 00:30:38.147 [2024-07-16 00:56:55.908839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.147 [2024-07-16 00:56:55.908859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.147 qpair failed and we were unable to recover it. 00:30:38.147 [2024-07-16 00:56:55.909082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.147 [2024-07-16 00:56:55.909101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.147 qpair failed and we were unable to recover it. 
00:30:38.147 [2024-07-16 00:56:55.909345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.147 [2024-07-16 00:56:55.909365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:38.147 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for roughly 200 further connection attempts logged between 00:56:55.909 and 00:56:55.956 (elapsed 00:30:38.147 through 00:30:38.428); four attempts around 00:56:55.934-55.935 report tqpair=0x7efd34000b90 instead, after which the failures continue against tqpair=0x7efd44000b90 ...]
00:30:38.428 [2024-07-16 00:56:55.956694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.428 [2024-07-16 00:56:55.956713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:38.428 qpair failed and we were unable to recover it.
00:30:38.428 [2024-07-16 00:56:55.956898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.428 [2024-07-16 00:56:55.956918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.428 qpair failed and we were unable to recover it. 00:30:38.428 [2024-07-16 00:56:55.957126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.957145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.957365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.957384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.957526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.957545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.957767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.957786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.958021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.958040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.958220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.958240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.958469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.958489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.958676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.958695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.958915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.958935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 
00:30:38.429 [2024-07-16 00:56:55.959117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.959136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.959327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.959347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.959494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.959513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.959704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.959724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.959946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.959966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.960177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.960196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.960389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.960408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.960584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.960603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.960789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.960808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.961023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.961042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 
00:30:38.429 [2024-07-16 00:56:55.961297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.961317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.961513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.961532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.961667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.961688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.961888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.961908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.962194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.962213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.962479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.962499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.962673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.962692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.962883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.962903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.963042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.963060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.963185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.963204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 
00:30:38.429 [2024-07-16 00:56:55.963472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.963492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.963736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.963754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.963969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.963988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.964165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.429 [2024-07-16 00:56:55.964185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.429 qpair failed and we were unable to recover it. 00:30:38.429 [2024-07-16 00:56:55.964378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.964398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.964542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.964561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.964824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.964843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.965122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.965141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.965358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.965378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.965516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.965534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 
00:30:38.430 [2024-07-16 00:56:55.965727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.965746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.966021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.966041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.966251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.966277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.966477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.966496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.966614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.966632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.966764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.966783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.967029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.967048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.967222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.967242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.967524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.967543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.967726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.967745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 
00:30:38.430 [2024-07-16 00:56:55.968057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.968076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.968253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.968290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.968426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.968445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.968707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.968725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.968979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.968998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.969176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.969194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.969429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.969449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.969661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.969680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.969950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.969970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.970154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.970173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 
00:30:38.430 [2024-07-16 00:56:55.970280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.970301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.970504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.970523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.970645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.970667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.970798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.970818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.970993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.971012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.971136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.971154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.971284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.971304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.971429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.971447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.971622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.971641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.971946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.971966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 
00:30:38.430 [2024-07-16 00:56:55.972181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.972200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.972342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.972362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.972494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.430 [2024-07-16 00:56:55.972512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.430 qpair failed and we were unable to recover it. 00:30:38.430 [2024-07-16 00:56:55.972645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.972664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.972781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.972801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.972922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.972941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.973077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.973096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.973225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.973243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.973388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.973408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.973536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.973555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 
00:30:38.431 [2024-07-16 00:56:55.973660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.973678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.973860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.973879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.974094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.974114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.974360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.974380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.974595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.974615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.974723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.974742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.974876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.974895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.975073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.975092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.975210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.975228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.975430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.975451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 
00:30:38.431 [2024-07-16 00:56:55.975630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.975648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.975759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.975778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.975961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.975981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.976175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.976194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.976333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.976352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.976458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.976477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.976697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.976716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.976904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.976923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.977033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.977052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.977348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.977368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 
00:30:38.431 [2024-07-16 00:56:55.977472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.977491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.977608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.977627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.977751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.431 [2024-07-16 00:56:55.977777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.431 qpair failed and we were unable to recover it. 00:30:38.431 [2024-07-16 00:56:55.977973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.977993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.978104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.978123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.978251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.978279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.978396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.978414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.978586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.978606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.978726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.978745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.978859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.978878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 
00:30:38.432 [2024-07-16 00:56:55.979068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.979087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.979215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.979234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.979551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.979571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.979764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.979783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.979977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.979996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.980134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.980153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.980364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.980385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.980496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.980514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.980643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.980661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.980776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.980796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 
00:30:38.432 [2024-07-16 00:56:55.980915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.980933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.981109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.981129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.981304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.981324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.981500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.981518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.981702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.981722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.981964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.981983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.982101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.982120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.982228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.982246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.982498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.982518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 00:30:38.432 [2024-07-16 00:56:55.982793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.432 [2024-07-16 00:56:55.982813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.432 qpair failed and we were unable to recover it. 
00:30:38.432 [2024-07-16 00:56:55.983124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.432 [2024-07-16 00:56:55.983143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:38.432 qpair failed and we were unable to recover it.
00:30:38.432 [... the same three-line error record (connect() failed, errno = 111; sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every connection attempt from 00:56:55.983 through 00:56:56.020 ...]
00:30:38.437 [2024-07-16 00:56:56.020200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.437 [2024-07-16 00:56:56.020220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:38.438 qpair failed and we were unable to recover it.
00:30:38.438 [2024-07-16 00:56:56.020367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.020386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.020501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.020519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.020638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.020661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.020782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.020801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.020918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.020937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.021112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.021131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.021263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.021282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.021402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.021421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.021544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.021564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.021783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.021803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 
00:30:38.438 [2024-07-16 00:56:56.021997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.022016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.022201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.022220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.022344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.022364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.022502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.022521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.022706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.022725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.022850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.022870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.022999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.023019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.023207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.023227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.023419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.023439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.023632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.023651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 
00:30:38.438 [2024-07-16 00:56:56.023771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.023791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.023894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.023913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.024040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.024059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.024245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.024276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.024388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.024409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.024544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.024563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.024814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.024832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.024952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.024971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.025082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.025101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.025232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.025251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 
00:30:38.438 [2024-07-16 00:56:56.025435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.025454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.025628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.025647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.025763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.025782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.025951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.025970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.026091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.026111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.026287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.026307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.026420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.026438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.026612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.026631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.026751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.026770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 00:30:38.438 [2024-07-16 00:56:56.026884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.438 [2024-07-16 00:56:56.026903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.438 qpair failed and we were unable to recover it. 
00:30:38.439 [2024-07-16 00:56:56.027009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.027029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.027204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.027224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.027348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.027371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.027496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.027515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.027620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.027639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.027760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.027780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.027884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.027904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.028114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.028133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.028246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.028273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.028521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.028541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 
00:30:38.439 [2024-07-16 00:56:56.028667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.028687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.028811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.028830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.028951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.028969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.029147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.029167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.029340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.029359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.029540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.029559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.029682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.029702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.029823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.029843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.029969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.029988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.030096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.030115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 
00:30:38.439 [2024-07-16 00:56:56.030252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.030280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.030396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.030416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.030527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.030547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.030701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.030721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.030873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.030892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.031073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.031092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.031335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.031355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.031538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.031557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.031683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.031702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.031835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.031854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 
00:30:38.439 [2024-07-16 00:56:56.032084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.032103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.032223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.032242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.032400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.032419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.032538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.032557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.032727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.032745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.032950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.032969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.033079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.033098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.033207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.033227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.033422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.033442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 00:30:38.439 [2024-07-16 00:56:56.033551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.439 [2024-07-16 00:56:56.033570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.439 qpair failed and we were unable to recover it. 
00:30:38.439 [2024-07-16 00:56:56.033675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.033694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.033824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.033844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.034027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.034046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.034172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.034192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.034313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.034332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.034457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.034477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.034653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.034672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.034859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.034879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.035056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.035075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.035285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.035305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 
00:30:38.440 [2024-07-16 00:56:56.035441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.035461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.035573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.035592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.035771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.035789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.035897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.035916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.036039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.036058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.036241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.036267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.036446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.036465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.036570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.036589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.036691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.036710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.036836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.036855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 
00:30:38.440 [2024-07-16 00:56:56.037030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.037049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.037177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.037196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.037316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.037336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.037453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.037473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.037662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.037682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.037788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.037808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.037989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.038008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.038129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.038148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.038328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.038348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.038472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.038495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 
00:30:38.440 [2024-07-16 00:56:56.038678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.038697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.038886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.038905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.039114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.039134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.039379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.039400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.440 [2024-07-16 00:56:56.039519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.440 [2024-07-16 00:56:56.039538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.440 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.039657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.039677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.039804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.039824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.040002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.040021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.040142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.040161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.040345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.040365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 
00:30:38.441 [2024-07-16 00:56:56.040472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.040491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.040668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.040686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.040784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.040803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.040904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.040923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.041101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.041120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.041243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.041270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.041466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.041486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.041695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.041714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.042007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.042026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.042164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.042183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 
00:30:38.441 [2024-07-16 00:56:56.042311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.042331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.042442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.042461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.042591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.042610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.042786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.042805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.042996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.043016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.043172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.043192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.043306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.043326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.043461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.043480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.043670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.043689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.043792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.043811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 
00:30:38.441 [2024-07-16 00:56:56.043992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.044011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.044119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.044138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.044332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.044352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.044548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.044567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.044702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.044722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.044839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.044858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.045067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.045086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.045300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.045319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.045437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.045456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.045647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.045669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 
00:30:38.441 [2024-07-16 00:56:56.045854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.045873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.046016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.046035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.046223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.046242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.046359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.046378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.046489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.046508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.441 qpair failed and we were unable to recover it. 00:30:38.441 [2024-07-16 00:56:56.046688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.441 [2024-07-16 00:56:56.046707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.046902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.046922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.047099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.047119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.047222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.047241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.047367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.047387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 
00:30:38.442 [2024-07-16 00:56:56.047490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.047510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.047620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.047639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.047890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.047909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.048179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.048198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.048327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.048347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.048464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.048484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.048602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.048621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.048800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.048819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.048940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.048959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.049055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.049074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 
00:30:38.442 [2024-07-16 00:56:56.049268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.049288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.049418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.049437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.049561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.049580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.049757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.049776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.049890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.049909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.050015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.050034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.050155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.050175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.050307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.050327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.050502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.050521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.050695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.050714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 
00:30:38.442 [2024-07-16 00:56:56.050819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.050838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.051031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.051050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.051226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.051246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.051424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.051443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.051645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.051664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.051782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.051801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.051975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.051994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.052177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.052196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.052399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.052418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.052596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.052618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 
00:30:38.442 [2024-07-16 00:56:56.052805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.052824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.053001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.053021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.053158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.053177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.053298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.053319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.442 [2024-07-16 00:56:56.053565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.442 [2024-07-16 00:56:56.053585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.442 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.053772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.053791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.053934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.053953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.054077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.054096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.054212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.054231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.054340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.054360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 
00:30:38.443 [2024-07-16 00:56:56.054569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.054589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.054692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.054711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.054834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.054854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.055035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.055054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.055175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.055195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.055413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.055433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.055627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.055645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.055856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.055875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.055977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.055995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.056104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.056123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 
00:30:38.443 [2024-07-16 00:56:56.056228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.056247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.056432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.056451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.056578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.056597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.056703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.056722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.056898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.056917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.057056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.057076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.057283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.057304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.057441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.057460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.057584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.057603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.057806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.057825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 
00:30:38.443 [2024-07-16 00:56:56.057945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.057964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.058090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.058109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.058296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.058316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.058422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.058441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.058533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.058553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.058740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.058759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.058955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.443 [2024-07-16 00:56:56.058975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.443 qpair failed and we were unable to recover it. 00:30:38.443 [2024-07-16 00:56:56.059103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.059122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.059241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.059267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.059444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.059467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 
00:30:38.444 [2024-07-16 00:56:56.059598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.059617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.059731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.059750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.059857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.059877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.060049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.060069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.060176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.060195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.060300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.060320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.060482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.060502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.060614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.060633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.060836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.060854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.060973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.060992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 
00:30:38.444 [2024-07-16 00:56:56.061175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.061194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.061297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.061318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.061429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.061448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.061579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.061597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.061772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.061791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.061981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.062001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.062111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.062130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.062325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.062345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.062471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.062490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.062595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.062614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 
00:30:38.444 [2024-07-16 00:56:56.062825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.062844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.062949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.062967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.063083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.063103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.063213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.063232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.063356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.063376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.063492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.063511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.063695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.063714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.063887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.063906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.064014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.064033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.064203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.064222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 
00:30:38.444 [2024-07-16 00:56:56.064348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.064368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.444 qpair failed and we were unable to recover it. 00:30:38.444 [2024-07-16 00:56:56.064557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.444 [2024-07-16 00:56:56.064576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.064763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.064782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.064886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.064904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.065089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.065108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.065231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.065250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.065452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.065471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.065665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.065684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.065809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.065828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.065947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.065970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 
00:30:38.445 [2024-07-16 00:56:56.066145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.066165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.066355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.066374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.066487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.066506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.066636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.066654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.066839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.066859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.067067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.067087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.067208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.067226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.067437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.067457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.067571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.067590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.067714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.067733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 
00:30:38.445 [2024-07-16 00:56:56.067917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.067936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.068128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.068147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.068281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.068301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.068417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.068437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.068545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.068564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.068753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.068772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.068949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.068968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.069150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.069169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.069293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.069314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.069424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.069442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 
00:30:38.445 [2024-07-16 00:56:56.069574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.069593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.069799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.069818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.069929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.445 [2024-07-16 00:56:56.069948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.445 qpair failed and we were unable to recover it. 00:30:38.445 [2024-07-16 00:56:56.070064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.070083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.070187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.070206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.070327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.070347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.070508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.070528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.070654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.070673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.070916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.070935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.071047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.071066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 
00:30:38.446 [2024-07-16 00:56:56.071266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.071286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.071403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.071422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.071608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.071627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.071814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.071833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.071938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.071957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.072075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.072094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.072216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.072235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.072452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.072471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.072579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.072598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.072791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.072813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 
00:30:38.446 [2024-07-16 00:56:56.072987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.073007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.073122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.073141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.073328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.073349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.073567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.073587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.073714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.073733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.073851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.073871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.074117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.074136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.074306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.074326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.074533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.074552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.074677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.074696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 
00:30:38.446 [2024-07-16 00:56:56.074872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.074891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.075000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.075019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.075137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.075156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.075333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.075353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.075542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.075561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.075751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.075770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.075913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.075932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.076111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.076130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.076263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.076282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.076392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.076410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 
00:30:38.446 [2024-07-16 00:56:56.076606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.076624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.076799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.076818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.077001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.077021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.446 qpair failed and we were unable to recover it. 00:30:38.446 [2024-07-16 00:56:56.077142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.446 [2024-07-16 00:56:56.077162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.077347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.077368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.077551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.077570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.077691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.077711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.077891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.077910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.078046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.078066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.078308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.078328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 
00:30:38.447 [2024-07-16 00:56:56.078442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.078461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.078664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.078683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.078790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.078809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.078931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.078950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.079074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.079094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.079210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.079230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.079489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.079508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.079617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.079636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.079819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.079838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.080019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.080041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 
00:30:38.447 [2024-07-16 00:56:56.080232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.080252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.080386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.080405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.080516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.080536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.080653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.080672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.080915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.080933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.081154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.081174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.081295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.081315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.081423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.081443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.081620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.081639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.081840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.081860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 
00:30:38.447 [2024-07-16 00:56:56.082033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.082053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.082189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.082209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.082338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.082359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.082568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.082588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.082779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.447 [2024-07-16 00:56:56.082799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.447 qpair failed and we were unable to recover it. 00:30:38.447 [2024-07-16 00:56:56.082976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.082996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.083139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.083158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.083355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.083374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.083496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.083515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.083688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.083707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 
00:30:38.448 [2024-07-16 00:56:56.083822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.083841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.083942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.083961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.084082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.084101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.084350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.084370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.084561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.084580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.084705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.084724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.084849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.084869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.085109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.085129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.085368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.085388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.085502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.085521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 
00:30:38.448 [2024-07-16 00:56:56.085711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.085731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.085842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.085861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.086036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.086054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.086170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.086191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.086434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.086454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.086669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.086688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.086801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.086820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.087002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.087021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.087198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.087217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.087337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.087361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 
00:30:38.448 [2024-07-16 00:56:56.087549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.087568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.087692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.087711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.087821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.087840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.088065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.088084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.088263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.088283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.088499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.088518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.088632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.088650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.088833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.088851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.089042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.089061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.089332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.089352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 
00:30:38.448 [2024-07-16 00:56:56.089596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.089616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.089803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.089822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.089941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.089960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.090087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.090108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.090220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.448 [2024-07-16 00:56:56.090239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.448 qpair failed and we were unable to recover it. 00:30:38.448 [2024-07-16 00:56:56.090394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.090415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.090598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.090617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.090803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.090822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.090997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.091016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.091209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.091229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 
00:30:38.449 [2024-07-16 00:56:56.091423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.091443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.091559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.091579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.091751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.091770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.091952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.091972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.092077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.092096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.092309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.092329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.092511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.092531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.092618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.092637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.092832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.092851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.093110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.093129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 
00:30:38.449 [2024-07-16 00:56:56.093315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.093335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.093442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.093462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.093590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.093609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.093720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.093739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.093926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.093944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.094119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.094139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.094359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.094379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.094485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.094504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.094613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.094632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 00:30:38.449 [2024-07-16 00:56:56.094850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.094873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.449 qpair failed and we were unable to recover it. 
00:30:38.449 [2024-07-16 00:56:56.095017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.449 [2024-07-16 00:56:56.095037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.095227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.095246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.095369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.095389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.095569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.095587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.095761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.095780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.095897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.095917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.096106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.096125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.096238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.096265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.096454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.096474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.096602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.096620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 
00:30:38.450 [2024-07-16 00:56:56.096881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.096901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.097026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.097046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.097291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.097311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.097529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.097548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.097671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.097689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.097814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.097833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.097937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.097957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.098147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.098167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.098360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.098380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.098496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.098514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 
00:30:38.450 [2024-07-16 00:56:56.098636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.098656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.098913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.098932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.099116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.099134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.099243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.099270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.099466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.099484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.099604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.099623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.099854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.099874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.099990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.100009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.100126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.100146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.100260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.100280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 
00:30:38.450 [2024-07-16 00:56:56.100392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.100412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.100512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.100531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.100707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.100726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.100836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.100856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.100959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.100979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.101223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.101243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.101369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.101388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.101632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.101651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.101776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.101795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.450 qpair failed and we were unable to recover it. 00:30:38.450 [2024-07-16 00:56:56.101922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.450 [2024-07-16 00:56:56.101945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.451 qpair failed and we were unable to recover it. 
00:30:38.451 [2024-07-16 00:56:56.102124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.451 [2024-07-16 00:56:56.102142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.451 qpair failed and we were unable to recover it. 00:30:38.451 [2024-07-16 00:56:56.102408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.451 [2024-07-16 00:56:56.102427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.451 qpair failed and we were unable to recover it. 00:30:38.451 [2024-07-16 00:56:56.102607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.451 [2024-07-16 00:56:56.102626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.451 qpair failed and we were unable to recover it. 00:30:38.451 [2024-07-16 00:56:56.102808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.451 [2024-07-16 00:56:56.102827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.451 qpair failed and we were unable to recover it. 00:30:38.451 [2024-07-16 00:56:56.103074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.451 [2024-07-16 00:56:56.103094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.451 qpair failed and we were unable to recover it. 00:30:38.451 [2024-07-16 00:56:56.103217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.451 [2024-07-16 00:56:56.103236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.451 qpair failed and we were unable to recover it. 00:30:38.451 [2024-07-16 00:56:56.103422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.451 [2024-07-16 00:56:56.103441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.451 qpair failed and we were unable to recover it. 00:30:38.451 [2024-07-16 00:56:56.103626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.451 [2024-07-16 00:56:56.103646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.451 qpair failed and we were unable to recover it. 00:30:38.451 [2024-07-16 00:56:56.103797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.451 [2024-07-16 00:56:56.103817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.451 qpair failed and we were unable to recover it. 00:30:38.451 [2024-07-16 00:56:56.103928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.451 [2024-07-16 00:56:56.103947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.451 qpair failed and we were unable to recover it. 
[... the same three-line error sequence repeats for each subsequent connection attempt logged between 00:56:56.104108 and 00:56:56.139395: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:30:38.456 [2024-07-16 00:56:56.139517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:38.456 [2024-07-16 00:56:56.139538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420
00:30:38.456 qpair failed and we were unable to recover it.
00:30:38.456 [2024-07-16 00:56:56.139726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.139745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.139872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.139891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.140139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.140158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.140273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.140293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.140472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.140491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.140600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.140620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.140884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.140904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.141096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.141116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.141220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.141240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.141368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.141388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 
00:30:38.456 [2024-07-16 00:56:56.141570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.141590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.141704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.141723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.141844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.141863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.142048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.142068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.142181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.142200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.142392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.142412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.142515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.142537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.142748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.142768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.142942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.142961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.143139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.143158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 
00:30:38.456 [2024-07-16 00:56:56.143380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.456 [2024-07-16 00:56:56.143400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.456 qpair failed and we were unable to recover it. 00:30:38.456 [2024-07-16 00:56:56.143581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.143600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.143721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.143741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.143936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.143956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.144076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.144094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.144269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.144289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.144398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.144417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.144644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.144663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.144859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.144878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.145004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.145023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 
00:30:38.457 [2024-07-16 00:56:56.145142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.145162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.145374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.145393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.145605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.145624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.145731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.145751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.145889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.145908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.146023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.146043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.146147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.146166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.146342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.146363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.146486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.146506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.146698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.146717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 
00:30:38.457 [2024-07-16 00:56:56.146902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.146921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.147109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.147128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.147237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.147263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.147385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.147406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.147531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.147550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.147752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.147771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.148056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.148075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.148336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.148356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.148481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.148501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.148622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.148641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 
00:30:38.457 [2024-07-16 00:56:56.148753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.148772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.149030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.149049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.149298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.149319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.149430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.149449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.149555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.149574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.149765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.149785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.149961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.149983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.150223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.150242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.150430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.150450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.150559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.150578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 
00:30:38.457 [2024-07-16 00:56:56.150770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.150789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.151000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.457 [2024-07-16 00:56:56.151020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.457 qpair failed and we were unable to recover it. 00:30:38.457 [2024-07-16 00:56:56.151130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.151149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.151266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.151286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.151502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.151521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.151693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.151712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.151846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.151866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.152074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.152093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.152243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.152283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.152576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.152596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 
00:30:38.458 [2024-07-16 00:56:56.152863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.152883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.153062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.153081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.153272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.153293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.153417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.153437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.153686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.153705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.153826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.153846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.153959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.153978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.154161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.154181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.154312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.154331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.154523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.154543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 
00:30:38.458 [2024-07-16 00:56:56.154676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.154695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.154803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.154822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.154937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.154956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.155067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.155086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.155331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.155351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.155459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.155478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.155698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.155717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.155946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.155966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.156142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.156162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.156385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.156404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 
00:30:38.458 [2024-07-16 00:56:56.156646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.156666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.156938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.156957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.157206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.157225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.157502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.157522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.157707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.157727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.157839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.157859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.458 [2024-07-16 00:56:56.157990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.458 [2024-07-16 00:56:56.158012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.458 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.158269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.158289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.158466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.158485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.158659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.158680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 
00:30:38.459 [2024-07-16 00:56:56.158787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.158807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.158912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.158932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.159122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.159142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.159328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.159348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.159528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.159547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.159790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.159810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.159932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.159951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.160062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.160081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.160185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.160203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.160421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.160441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 
00:30:38.459 [2024-07-16 00:56:56.160564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.160584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.160706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.160726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.160968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.160987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.161183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.161202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.161395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.161415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.161543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.161562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.161860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.161879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.162126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.162146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.162323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.162343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.162581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.162601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 
00:30:38.459 [2024-07-16 00:56:56.162845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.162864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.162969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.162988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.163163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.163183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.163309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.163329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.163470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.163488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.163625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.163645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.163848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.163868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.163985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.164004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.164129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.164148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.164251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.164277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 
00:30:38.459 [2024-07-16 00:56:56.164457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.164476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.164731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.164751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.164955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.164975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.165084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.165103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.165223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.165242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.165431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.165451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.165694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.165742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.459 qpair failed and we were unable to recover it. 00:30:38.459 [2024-07-16 00:56:56.165867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.459 [2024-07-16 00:56:56.165886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.165997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.166017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.166193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.166213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 
00:30:38.460 [2024-07-16 00:56:56.166333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.166352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.166555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.166573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.166752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.166771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.166896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.166915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.167100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.167119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.167249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.167278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.167452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.167473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.167597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.167616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.167795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.167814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.167989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.168008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 
00:30:38.460 [2024-07-16 00:56:56.168197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.168217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.168349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.168369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.168490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.168509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.168614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.168634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.168738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.168757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.168875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.168894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.169069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.169088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.169192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.169212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.169489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.169510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.169699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.169718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 
00:30:38.460 [2024-07-16 00:56:56.169908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.169927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.170111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.170131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.170262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.170282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.170402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.170421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.170539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.170559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.170733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.170753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.170859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.170878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.171054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.171073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.171268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.171288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.171557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.171576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 
00:30:38.460 [2024-07-16 00:56:56.171749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.171769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.171954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.171974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.172155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.172174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.172276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.172296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.172566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.172585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.172699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.172718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.460 [2024-07-16 00:56:56.172825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.460 [2024-07-16 00:56:56.172847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.460 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.172986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.173005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.173129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.173147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.173344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.173363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 
00:30:38.461 [2024-07-16 00:56:56.173489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.173509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.173638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.173658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.173778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.173797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.173982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.174001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.174107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.174126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.174278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.174298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.174415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.174435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.174560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.174579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.174698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.174716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.174892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.174911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 
00:30:38.461 [2024-07-16 00:56:56.175097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.175117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.175312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.175331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.175506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.175526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.175713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.175733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.176000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.176019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.176219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.176238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.176443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.176463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.176658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.176678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.176879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.176897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.177111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.177130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 
00:30:38.461 [2024-07-16 00:56:56.177334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.177354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.177624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.177643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.177778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.177798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.177915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.177934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.178080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.178099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.178226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.178245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.178394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.178413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.178546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.178565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.178756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.178774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.179049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.179068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 
00:30:38.461 [2024-07-16 00:56:56.179343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.179363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.179540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.179559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.179736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.179755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.180069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.180088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.180281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.461 [2024-07-16 00:56:56.180300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.461 qpair failed and we were unable to recover it. 00:30:38.461 [2024-07-16 00:56:56.180513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.180533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.180673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.180695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.180837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.180856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.180977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.180995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.181202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.181221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 
00:30:38.462 [2024-07-16 00:56:56.181420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.181439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.181632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.181651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.181872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.181891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.182132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.182151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.182352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.182372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.182546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.182565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.182832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.182851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.183110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.183129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.183311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.183330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.183522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.183541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 
00:30:38.462 [2024-07-16 00:56:56.183661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.183681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.183807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.183826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.184044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.184064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.184265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.184285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.184468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.184487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.184623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.184642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.184761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.184780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.184904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.184923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.185055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.185074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.185193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.185212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 
00:30:38.462 [2024-07-16 00:56:56.185359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.185379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.185560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.185579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.185836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.185856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.186051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.186070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.186209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.186228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.186480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.186499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.186696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.462 [2024-07-16 00:56:56.186715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.462 qpair failed and we were unable to recover it. 00:30:38.462 [2024-07-16 00:56:56.186884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.186904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.187273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.187293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.187538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.187557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 
00:30:38.463 [2024-07-16 00:56:56.187759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.187778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.187980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.187999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.188196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.188215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.188459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.188479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.188586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.188605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.188824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.188843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.189053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.189075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.189344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.189364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.189478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.189496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.189768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.189787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 
00:30:38.463 [2024-07-16 00:56:56.189912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.189931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.190198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.190217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.190502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.190522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.190790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.190809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.190927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.190946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.191138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.191157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.191463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.191483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.191596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.191615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.191812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.191830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.192007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.192027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 
00:30:38.463 [2024-07-16 00:56:56.192384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.192404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.192595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.192615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.192806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.192826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.192953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.192972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.193148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.193168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.193380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.193401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.193540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.193559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.193758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.193778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.194010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.194030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.194161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.194180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 
00:30:38.463 [2024-07-16 00:56:56.194314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.194334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.194455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.194474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.194664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.194683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.194941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.194989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.195341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.195376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.195651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.195684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.195893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.463 [2024-07-16 00:56:56.195925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaafd0 with addr=10.0.0.2, port=4420 00:30:38.463 qpair failed and we were unable to recover it. 00:30:38.463 [2024-07-16 00:56:56.196166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.196188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.196410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.196430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.196697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.196716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 
00:30:38.464 [2024-07-16 00:56:56.196854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.196873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.197057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.197076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.197350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.197370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.197594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.197614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.197876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.197896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.198158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.198177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.198430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.198453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.198645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.198664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.198874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.198893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.199089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.199108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 
00:30:38.464 [2024-07-16 00:56:56.199386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.199406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.199587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.199606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.199807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.199826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.200113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.200132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.200315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.200335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.200533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.200552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.200726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.200744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.201032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.201052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.201232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.201251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.201493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.201512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 
00:30:38.464 [2024-07-16 00:56:56.201762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.201782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.202058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.202077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.202350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.202369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.202550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.202568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.202761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.202779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.203063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.203083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.203332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.203351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.203546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.203564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.203809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.203828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.204104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.204124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 
00:30:38.464 [2024-07-16 00:56:56.204339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.204358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.204500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.204518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.204699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.204719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 [2024-07-16 00:56:56.205026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.205049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.464 qpair failed and we were unable to recover it. 00:30:38.464 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:38.464 [2024-07-16 00:56:56.205304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.464 [2024-07-16 00:56:56.205325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:38.465 [2024-07-16 00:56:56.205465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.205484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.205692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.205712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:38.465 [2024-07-16 00:56:56.205874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.205894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 
00:30:38.465 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:38.465 [2024-07-16 00:56:56.206083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.206102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:38.465 [2024-07-16 00:56:56.206359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.206379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.206575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.206595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.206912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.206933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.207209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.207228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.207554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.207573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.207714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.207733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.208063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.208082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.208349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.208369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 
00:30:38.465 [2024-07-16 00:56:56.208636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.208655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.208843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.208863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.209061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.209081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.209327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.209347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.209529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.209548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.209827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.209847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.210147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.210167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.210465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.210486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.210765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.210784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.210958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.210977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 
00:30:38.465 [2024-07-16 00:56:56.211209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.211228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.211457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.211476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.211749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.211768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.211905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.211924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.212061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.212080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.212209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.212228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.212452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.212472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.212665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.212685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.212921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.212940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.213156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.213176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 
00:30:38.465 [2024-07-16 00:56:56.213430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.213450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.213608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.213627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.213871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.213890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.214068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.214089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.214283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.214306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.214454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.214473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.214590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.214610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.214736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.214756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.214967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.465 [2024-07-16 00:56:56.214986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.465 qpair failed and we were unable to recover it. 00:30:38.465 [2024-07-16 00:56:56.215235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.215263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 
00:30:38.466 [2024-07-16 00:56:56.215470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.215490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.215677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.215696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.215925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.215945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.216175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.216195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.216442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.216463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.216594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.216613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.216757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.216777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.217018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.217038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.217301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.217322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.217511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.217532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 
00:30:38.466 [2024-07-16 00:56:56.217730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.217750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.218025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.218045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.218174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.218194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.218384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.218403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.218547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.218566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.218709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.218728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.219021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.219040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.219175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.219196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.219383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.219403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.219590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.219608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 
00:30:38.466 [2024-07-16 00:56:56.219799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.219819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.219936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.219955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.220134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.220155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.220360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.220380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.220649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.220669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.220880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.220899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.221070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.221089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.221405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.221425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.221569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.221588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.221722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.221741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 
00:30:38.466 [2024-07-16 00:56:56.221954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.221974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.222114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.222134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.222362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.222382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.222504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.222522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.222647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.222671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.222888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.222907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.223190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.223210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.223407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.223427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.223591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.223610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.466 [2024-07-16 00:56:56.223739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.223758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 
00:30:38.466 [2024-07-16 00:56:56.224042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.466 [2024-07-16 00:56:56.224061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.466 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.224296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.224316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.224479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.224499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.224695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.224714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.224904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.224923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.225150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.225170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.225384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.225405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.225653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.225673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.225872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.225892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.226176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.226195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 
00:30:38.467 [2024-07-16 00:56:56.226475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.226495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.226625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.226644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.226768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.226788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.227022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.227041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.227252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.227278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.227434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.227453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.227709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.227728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.227856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.227875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.228074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.228093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.228368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.228388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 
00:30:38.467 [2024-07-16 00:56:56.228574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.228594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.228787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.228807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.229180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.229200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.229437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.229457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.229633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.229652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.229773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.229791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.229977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.229996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.230115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.230135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.230324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.230344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.230536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.230555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 
00:30:38.467 [2024-07-16 00:56:56.230691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.230710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.230998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.231017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.231265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.231286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.231481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.231500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.231632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.231655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.231809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.231828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.232090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.232109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.232368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.467 [2024-07-16 00:56:56.232391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.467 qpair failed and we were unable to recover it. 00:30:38.467 [2024-07-16 00:56:56.232536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.232555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.232691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.232713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 
00:30:38.468 [2024-07-16 00:56:56.233021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.233040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.233338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.233358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.233486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.233507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.233614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.233633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.233831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.233849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.234050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.234069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.234176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.234195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.234442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.234461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.234594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.234614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.234754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.234774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 
00:30:38.468 [2024-07-16 00:56:56.235048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.235067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.235244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.235269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.235416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.235435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.235593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.235612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.235718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.235736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.235977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.235996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.236195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.236215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.236339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.236359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.236493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.236512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.236694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.236713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 
00:30:38.468 [2024-07-16 00:56:56.236853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.236873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.237083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.237102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.237290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.237309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.237455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.237474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.237616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.237635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.237827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.237847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.238119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.238139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.238357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.238378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.238501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.238520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.238647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.238667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 
00:30:38.468 [2024-07-16 00:56:56.238950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.238969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.239124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.239144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.239333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.239353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.239478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.239497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.239721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.239746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.239886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.239905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.240092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.240112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.240396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.240416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.240549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.240569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.240778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.240798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 
00:30:38.468 [2024-07-16 00:56:56.240942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.240961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.241136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.241155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.468 qpair failed and we were unable to recover it. 00:30:38.468 [2024-07-16 00:56:56.241362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.468 [2024-07-16 00:56:56.241381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.241531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.241551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:38.469 [2024-07-16 00:56:56.241669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.241690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.241796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.241815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:38.469 [2024-07-16 00:56:56.242007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.242028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.242225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.242244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 
00:30:38.469 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.469 [2024-07-16 00:56:56.242415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.242436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.242574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.242593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:38.469 [2024-07-16 00:56:56.242779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.242799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.243034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.243054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.243231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.243250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.243479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.243498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.243692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.243712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.243981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.243999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.244171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.244190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 
00:30:38.469 [2024-07-16 00:56:56.244366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.244386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.244580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.244600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.244798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.244822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.244954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.244973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.245184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.245203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.245430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.245451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.245676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.245695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.245917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.245936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.246119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.246139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.246371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.246391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 
00:30:38.469 [2024-07-16 00:56:56.246587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.246606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.246751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.246770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.246893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.246913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.247179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.247199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.469 [2024-07-16 00:56:56.247346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.469 [2024-07-16 00:56:56.247366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.469 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.247493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.247513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.247641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.247661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.247801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.247822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.248101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.248120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.248294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.248313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 
00:30:38.731 [2024-07-16 00:56:56.248462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.248481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.248597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.248617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.248843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.248862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.249058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.249077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.249318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.249338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.249484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.249503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.249733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.249753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.249981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.250002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.250181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.250201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.250382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.250402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 
00:30:38.731 [2024-07-16 00:56:56.250525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.250544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.250664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.250684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.250807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.250827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.251014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.251033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.251227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.731 [2024-07-16 00:56:56.251246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.731 qpair failed and we were unable to recover it. 00:30:38.731 [2024-07-16 00:56:56.251471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.251490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.251686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.251706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.251851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.251870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.252129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.252149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.252345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.252364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 
00:30:38.732 [2024-07-16 00:56:56.252543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.252562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.252748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.252768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.253032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.253054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.253232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.253251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.253492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.253511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.253653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.253672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.253857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.253876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.254072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.254092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.254283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.254304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.254548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.254567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 
00:30:38.732 [2024-07-16 00:56:56.254711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.254730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.254950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.254970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.255220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.255239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.255452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.255471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.255602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.255622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.255758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.255777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.256048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.256067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.256341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.256361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.256481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.256500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.256636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.256656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 
00:30:38.732 [2024-07-16 00:56:56.257027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.257047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.257251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.257277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.257407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.257427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.257540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.257560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.257758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.257779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.258043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.732 [2024-07-16 00:56:56.258064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.732 qpair failed and we were unable to recover it. 00:30:38.732 [2024-07-16 00:56:56.258251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.258278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.258424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.258444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.258644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.258663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.258801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.258821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 
00:30:38.733 [2024-07-16 00:56:56.259005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.259024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.259197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.259217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.259426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.259446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.259581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.259600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.259788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.259808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.259913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.259933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.260190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.260210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.260461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.260483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.260732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.260752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.260894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.260913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 
00:30:38.733 [2024-07-16 00:56:56.261182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.261202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.261393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.261414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.261615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.261638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.261787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.261806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.262083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.262103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.262319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.262342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.262478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.262498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.262695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.262715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.262840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.262859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.262971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.262991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 
00:30:38.733 [2024-07-16 00:56:56.263119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.263138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.263384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.263406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.263689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.263710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.263978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.263997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.264269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.264289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.264435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.264454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.264585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.264604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.733 [2024-07-16 00:56:56.264783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.733 [2024-07-16 00:56:56.264802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.733 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.264979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.264999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.265252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.265279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 
00:30:38.734 [2024-07-16 00:56:56.265506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.265525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.265722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.265741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.266026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.266046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.266365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.266385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.266573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.266592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.266868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.266888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.267078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.267097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.267271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.267291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.267567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.267587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.267851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.267871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 
00:30:38.734 [2024-07-16 00:56:56.268030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.268049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.268274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.268294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.268503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.268522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.268709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.268729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.268921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.268940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.269117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.269136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.269381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.269401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.269672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.269691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 Malloc0 00:30:38.734 [2024-07-16 00:56:56.269829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.269848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.270050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.270070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 
00:30:38.734 [2024-07-16 00:56:56.270347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.270367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.734 [2024-07-16 00:56:56.270589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.270608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:38.734 [2024-07-16 00:56:56.270793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.270813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.271035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.734 [2024-07-16 00:56:56.271055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:38.734 [2024-07-16 00:56:56.271343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.271363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.271574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.271594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.271845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.271864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.271989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.272009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 
00:30:38.734 [2024-07-16 00:56:56.272186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.272205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.734 [2024-07-16 00:56:56.272447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.734 [2024-07-16 00:56:56.272467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.734 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.272654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.272673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.273015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.273034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.273218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.273238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.273416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.273436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.273605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.273624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.273767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.273786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.273915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.273933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.274230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.274250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 
00:30:38.735 [2024-07-16 00:56:56.274521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.274541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.274718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.274738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.275032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.275052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.275323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.275343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.275552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.275572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.275804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.275823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.276089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.276108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.276302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.276322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.276450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.276469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.276652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.276676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 
00:30:38.735 [2024-07-16 00:56:56.276888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.276908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.277192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.277212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.277438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.277458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.277475] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.735 [2024-07-16 00:56:56.277686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.277705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.277838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.277855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.278043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.278064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.278320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.278340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.278526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.278545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.278740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.278759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 
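The *** TCP Transport Init *** notice embedded in the entries above is printed by nvmf_tcp_create when the target brings up its NVMe/TCP transport; in SPDK test scripts that step is normally driven by the nvmf_create_transport RPC. A minimal sketch, assuming SPDK's scripts/rpc.py is available and an nvmf_tgt application is already running (the invocation below is illustrative, not taken from this log):

  # Bring up the TCP transport on the running nvmf target (tuning flags omitted)
  scripts/rpc.py nvmf_create_transport -t tcp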
00:30:38.735 [2024-07-16 00:56:56.279055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.279075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.279334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.735 [2024-07-16 00:56:56.279354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.735 qpair failed and we were unable to recover it. 00:30:38.735 [2024-07-16 00:56:56.279549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.279569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.279765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.279787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.280000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.280019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.280146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.280165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.280352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.280372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.280549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.280569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.280774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.280793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.281052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.281071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 
00:30:38.736 [2024-07-16 00:56:56.281267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.281287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.281542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.281561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.281689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.281709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.281998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.282017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.282205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.282224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.282416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.282435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.282581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.282600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.282741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.282760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.282971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.282990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.283262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.283282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 
00:30:38.736 [2024-07-16 00:56:56.283474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.283493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.283680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.283699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.283915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.283934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.284182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.284201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.736 [2024-07-16 00:56:56.284343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.736 [2024-07-16 00:56:56.284363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.736 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.284637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.284657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.284798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.284817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.284926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.284945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.285129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.285148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.285380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.285399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 
00:30:38.737 [2024-07-16 00:56:56.285645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.285664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.285948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.285967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.286188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.286208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.737 [2024-07-16 00:56:56.286460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.286480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:38.737 [2024-07-16 00:56:56.286747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.286766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.737 [2024-07-16 00:56:56.287054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.287073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:38.737 [2024-07-16 00:56:56.287273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.287293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.287488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.287507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 
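Interleaved with the connection-failure entries, the harness creates the target subsystem with rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001. Outside the rpc_cmd wrapper the same call is typically issued through scripts/rpc.py; a minimal sketch mirroring the arguments seen in the log (the rpc.py path is an assumption):

  # -a allows any host NQN to connect; -s sets the subsystem serial number
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001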
00:30:38.737 [2024-07-16 00:56:56.287683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.287703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.287974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.287994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.288237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.288264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.288542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.288561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.288746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.288765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.288999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.289017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.289201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.289221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.289507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.289526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.289661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.289681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.289886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.289906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 
00:30:38.737 [2024-07-16 00:56:56.290092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.290112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.290305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.290325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.290484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.290502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.737 qpair failed and we were unable to recover it. 00:30:38.737 [2024-07-16 00:56:56.290769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.737 [2024-07-16 00:56:56.290788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.291040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.291059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.291185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.291205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.291479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.291499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.291682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.291701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.291951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.291970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.292280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.292300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 
00:30:38.738 [2024-07-16 00:56:56.292598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.292616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.292808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.292827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.293088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.293107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.293292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.293311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.293610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.293630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.293820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.293838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.294095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.294114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.294290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.294311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.738 [2024-07-16 00:56:56.294501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.294521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 
00:30:38.738 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:38.738 [2024-07-16 00:56:56.294769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.294792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.738 [2024-07-16 00:56:56.295030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.295049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.295187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.295206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:38.738 [2024-07-16 00:56:56.295415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.295435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.295629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.295648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.295845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.295864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.296042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.296062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.296331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.296364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 
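The nvmf_subsystem_add_ns call at the start of the entries above attaches a bdev named Malloc0 to the subsystem as a namespace. A hedged sketch of the usual two-step sequence (the malloc bdev size and block size below are illustrative; the log does not show how Malloc0 was created):

  # Create a 64 MiB RAM-backed bdev with 512-byte blocks, then export it as a namespace
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0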
00:30:38.738 [2024-07-16 00:56:56.296553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.296572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.296847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.296867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.297059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.297078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.297273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.297292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.297410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.297429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.738 [2024-07-16 00:56:56.297615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.738 [2024-07-16 00:56:56.297634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.738 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.297808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.297827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.298118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.298137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.298408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.298428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.298633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.298652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 
00:30:38.739 [2024-07-16 00:56:56.298900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.298919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.299120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.299139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.299384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.299403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.299647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.299666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.299846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.299865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.299978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.299998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.300299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.300319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.300585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.300604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.300821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.300843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.301010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.301029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 
00:30:38.739 [2024-07-16 00:56:56.301327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.301347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.301622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.301641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.301931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.301951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.302168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.302187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.739 [2024-07-16 00:56:56.302382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.302403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:38.739 [2024-07-16 00:56:56.302659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.302679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.302924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.739 [2024-07-16 00:56:56.302944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.303193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:38.739 [2024-07-16 00:56:56.303212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 
00:30:38.739 [2024-07-16 00:56:56.303427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.303446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.303750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.303769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.304044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.304063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.304264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.304283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.304527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.304546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.304717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.739 [2024-07-16 00:56:56.304736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.739 qpair failed and we were unable to recover it. 00:30:38.739 [2024-07-16 00:56:56.304964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.740 [2024-07-16 00:56:56.304983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.740 qpair failed and we were unable to recover it. 00:30:38.740 [2024-07-16 00:56:56.305275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.740 [2024-07-16 00:56:56.305296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.740 qpair failed and we were unable to recover it. 00:30:38.740 [2024-07-16 00:56:56.305478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.740 [2024-07-16 00:56:56.305497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.740 qpair failed and we were unable to recover it. 00:30:38.740 [2024-07-16 00:56:56.305627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.740 [2024-07-16 00:56:56.305646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.740 qpair failed and we were unable to recover it. 
00:30:38.740 [2024-07-16 00:56:56.305938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.740 [2024-07-16 00:56:56.305956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.740 qpair failed and we were unable to recover it. 00:30:38.740 [2024-07-16 00:56:56.306174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:38.740 [2024-07-16 00:56:56.306193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efd44000b90 with addr=10.0.0.2, port=4420 00:30:38.740 qpair failed and we were unable to recover it. 00:30:38.740 [2024-07-16 00:56:56.306252] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.740 [2024-07-16 00:56:56.308190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.740 [2024-07-16 00:56:56.308350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.740 [2024-07-16 00:56:56.308381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.740 [2024-07-16 00:56:56.308395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.740 [2024-07-16 00:56:56.308408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.740 [2024-07-16 00:56:56.308439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.740 qpair failed and we were unable to recover it. 00:30:38.740 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.740 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:38.740 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.740 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:38.740 [2024-07-16 00:56:56.318213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.740 [2024-07-16 00:56:56.318387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.740 [2024-07-16 00:56:56.318416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.740 [2024-07-16 00:56:56.318430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.740 [2024-07-16 00:56:56.318443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.740 [2024-07-16 00:56:56.318472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.740 qpair failed and we were unable to recover it. 
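With nvmf_subsystem_add_listener ... -t tcp -a 10.0.0.2 -s 4420 applied, the target prints *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** and the host-side connect attempts start reaching it, so the failures from this point on come from the Fabrics CONNECT command rather than from the TCP socket. For orientation, the listener setup plus a host-side attach from a second SPDK application might look like the sketch below (the bdev_nvme_attach_controller line is an assumption about the host side; only the target-side RPCs are visible in this log):

  # Target side: expose the subsystem on 10.0.0.2:4420 over TCP
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Host side (illustrative): attach to the exported subsystem as a local bdev
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1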
00:30:38.740 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.740 00:56:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3224773 00:30:38.740 [2024-07-16 00:56:56.328173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.740 [2024-07-16 00:56:56.328288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.740 [2024-07-16 00:56:56.328317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.740 [2024-07-16 00:56:56.328330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.740 [2024-07-16 00:56:56.328343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.740 [2024-07-16 00:56:56.328369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.740 qpair failed and we were unable to recover it. 00:30:38.740 [2024-07-16 00:56:56.338329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.740 [2024-07-16 00:56:56.338479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.740 [2024-07-16 00:56:56.338506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.740 [2024-07-16 00:56:56.338519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.740 [2024-07-16 00:56:56.338531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.740 [2024-07-16 00:56:56.338558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.740 qpair failed and we were unable to recover it. 00:30:38.740 [2024-07-16 00:56:56.348144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.740 [2024-07-16 00:56:56.348301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.740 [2024-07-16 00:56:56.348329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.740 [2024-07-16 00:56:56.348346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.740 [2024-07-16 00:56:56.348359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.740 [2024-07-16 00:56:56.348387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.740 qpair failed and we were unable to recover it. 
00:30:38.740 [2024-07-16 00:56:56.358196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.740 [2024-07-16 00:56:56.358326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.740 [2024-07-16 00:56:56.358353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.740 [2024-07-16 00:56:56.358366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.740 [2024-07-16 00:56:56.358378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.740 [2024-07-16 00:56:56.358405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.740 qpair failed and we were unable to recover it. 00:30:38.740 [2024-07-16 00:56:56.368220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.740 [2024-07-16 00:56:56.368390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.740 [2024-07-16 00:56:56.368416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.740 [2024-07-16 00:56:56.368430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.740 [2024-07-16 00:56:56.368442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.740 [2024-07-16 00:56:56.368469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.740 qpair failed and we were unable to recover it. 00:30:38.740 [2024-07-16 00:56:56.378489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.740 [2024-07-16 00:56:56.378636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.740 [2024-07-16 00:56:56.378662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.740 [2024-07-16 00:56:56.378675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.740 [2024-07-16 00:56:56.378687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.740 [2024-07-16 00:56:56.378713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.740 qpair failed and we were unable to recover it. 
00:30:38.740 [2024-07-16 00:56:56.388250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.740 [2024-07-16 00:56:56.388367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.741 [2024-07-16 00:56:56.388392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.741 [2024-07-16 00:56:56.388405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.741 [2024-07-16 00:56:56.388417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.741 [2024-07-16 00:56:56.388443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.741 qpair failed and we were unable to recover it. 00:30:38.741 [2024-07-16 00:56:56.398272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.741 [2024-07-16 00:56:56.398382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.741 [2024-07-16 00:56:56.398407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.741 [2024-07-16 00:56:56.398420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.741 [2024-07-16 00:56:56.398432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.741 [2024-07-16 00:56:56.398458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.741 qpair failed and we were unable to recover it. 00:30:38.741 [2024-07-16 00:56:56.408321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.741 [2024-07-16 00:56:56.408439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.741 [2024-07-16 00:56:56.408465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.741 [2024-07-16 00:56:56.408478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.741 [2024-07-16 00:56:56.408490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.741 [2024-07-16 00:56:56.408517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.741 qpair failed and we were unable to recover it. 
00:30:38.741 [2024-07-16 00:56:56.418536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.741 [2024-07-16 00:56:56.418688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.741 [2024-07-16 00:56:56.418714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.741 [2024-07-16 00:56:56.418728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.741 [2024-07-16 00:56:56.418740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.741 [2024-07-16 00:56:56.418766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.741 qpair failed and we were unable to recover it. 00:30:38.741 [2024-07-16 00:56:56.428359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.741 [2024-07-16 00:56:56.428497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.741 [2024-07-16 00:56:56.428522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.741 [2024-07-16 00:56:56.428535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.741 [2024-07-16 00:56:56.428548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.741 [2024-07-16 00:56:56.428574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.741 qpair failed and we were unable to recover it. 00:30:38.741 [2024-07-16 00:56:56.438407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.741 [2024-07-16 00:56:56.438515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.741 [2024-07-16 00:56:56.438540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.741 [2024-07-16 00:56:56.438558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.741 [2024-07-16 00:56:56.438571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.741 [2024-07-16 00:56:56.438597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.741 qpair failed and we were unable to recover it. 
00:30:38.741 [2024-07-16 00:56:56.448443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.741 [2024-07-16 00:56:56.448579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.741 [2024-07-16 00:56:56.448607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.741 [2024-07-16 00:56:56.448620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.741 [2024-07-16 00:56:56.448632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.741 [2024-07-16 00:56:56.448659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.741 qpair failed and we were unable to recover it. 00:30:38.741 [2024-07-16 00:56:56.458646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.741 [2024-07-16 00:56:56.458781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.741 [2024-07-16 00:56:56.458806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.741 [2024-07-16 00:56:56.458819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.741 [2024-07-16 00:56:56.458830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.741 [2024-07-16 00:56:56.458857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.741 qpair failed and we were unable to recover it. 00:30:38.741 [2024-07-16 00:56:56.468516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.741 [2024-07-16 00:56:56.468633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.741 [2024-07-16 00:56:56.468658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.741 [2024-07-16 00:56:56.468671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.741 [2024-07-16 00:56:56.468683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.741 [2024-07-16 00:56:56.468709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.741 qpair failed and we were unable to recover it. 
00:30:38.741 [2024-07-16 00:56:56.478546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.741 [2024-07-16 00:56:56.478656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.741 [2024-07-16 00:56:56.478679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.741 [2024-07-16 00:56:56.478692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.741 [2024-07-16 00:56:56.478704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.741 [2024-07-16 00:56:56.478730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.741 qpair failed and we were unable to recover it. 00:30:38.741 [2024-07-16 00:56:56.488562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.741 [2024-07-16 00:56:56.488681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.741 [2024-07-16 00:56:56.488706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.741 [2024-07-16 00:56:56.488720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.741 [2024-07-16 00:56:56.488731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.741 [2024-07-16 00:56:56.488758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.741 qpair failed and we were unable to recover it. 00:30:38.741 [2024-07-16 00:56:56.498787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.741 [2024-07-16 00:56:56.498920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.741 [2024-07-16 00:56:56.498946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.741 [2024-07-16 00:56:56.498959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.741 [2024-07-16 00:56:56.498971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.741 [2024-07-16 00:56:56.498997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.741 qpair failed and we were unable to recover it. 
00:30:38.741 [2024-07-16 00:56:56.508608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.742 [2024-07-16 00:56:56.508741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.742 [2024-07-16 00:56:56.508766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.742 [2024-07-16 00:56:56.508780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.742 [2024-07-16 00:56:56.508792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.742 [2024-07-16 00:56:56.508819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.742 qpair failed and we were unable to recover it. 00:30:38.742 [2024-07-16 00:56:56.518660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.742 [2024-07-16 00:56:56.518774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.742 [2024-07-16 00:56:56.518799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.742 [2024-07-16 00:56:56.518811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.742 [2024-07-16 00:56:56.518824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.742 [2024-07-16 00:56:56.518849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.742 qpair failed and we were unable to recover it. 00:30:38.742 [2024-07-16 00:56:56.528842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.742 [2024-07-16 00:56:56.529040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.742 [2024-07-16 00:56:56.529070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.742 [2024-07-16 00:56:56.529084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.742 [2024-07-16 00:56:56.529095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.742 [2024-07-16 00:56:56.529121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.742 qpair failed and we were unable to recover it. 
00:30:38.742 [2024-07-16 00:56:56.539033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.742 [2024-07-16 00:56:56.539177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.742 [2024-07-16 00:56:56.539202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.742 [2024-07-16 00:56:56.539215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.742 [2024-07-16 00:56:56.539227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.742 [2024-07-16 00:56:56.539252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.742 qpair failed and we were unable to recover it. 00:30:38.742 [2024-07-16 00:56:56.548828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.742 [2024-07-16 00:56:56.548947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.742 [2024-07-16 00:56:56.548970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.742 [2024-07-16 00:56:56.548983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.742 [2024-07-16 00:56:56.548995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.742 [2024-07-16 00:56:56.549021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.742 qpair failed and we were unable to recover it. 00:30:38.742 [2024-07-16 00:56:56.558853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:38.742 [2024-07-16 00:56:56.558981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:38.742 [2024-07-16 00:56:56.559008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:38.742 [2024-07-16 00:56:56.559021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:38.742 [2024-07-16 00:56:56.559033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:38.742 [2024-07-16 00:56:56.559058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:38.742 qpair failed and we were unable to recover it. 
00:30:39.001 [2024-07-16 00:56:56.568756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.001 [2024-07-16 00:56:56.568861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.001 [2024-07-16 00:56:56.568886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.001 [2024-07-16 00:56:56.568899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.001 [2024-07-16 00:56:56.568912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.001 [2024-07-16 00:56:56.568943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.001 qpair failed and we were unable to recover it. 00:30:39.001 [2024-07-16 00:56:56.579041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.001 [2024-07-16 00:56:56.579180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.001 [2024-07-16 00:56:56.579207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.001 [2024-07-16 00:56:56.579220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.001 [2024-07-16 00:56:56.579232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.001 [2024-07-16 00:56:56.579265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.001 qpair failed and we were unable to recover it. 00:30:39.001 [2024-07-16 00:56:56.588871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.001 [2024-07-16 00:56:56.588995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.001 [2024-07-16 00:56:56.589021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.001 [2024-07-16 00:56:56.589034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.001 [2024-07-16 00:56:56.589047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.001 [2024-07-16 00:56:56.589074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.001 qpair failed and we were unable to recover it. 
00:30:39.001 [2024-07-16 00:56:56.598931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.001 [2024-07-16 00:56:56.599064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.001 [2024-07-16 00:56:56.599091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.001 [2024-07-16 00:56:56.599104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.001 [2024-07-16 00:56:56.599117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.001 [2024-07-16 00:56:56.599144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.001 qpair failed and we were unable to recover it. 00:30:39.001 [2024-07-16 00:56:56.608882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.001 [2024-07-16 00:56:56.608982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.001 [2024-07-16 00:56:56.609007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.001 [2024-07-16 00:56:56.609019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.001 [2024-07-16 00:56:56.609031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.001 [2024-07-16 00:56:56.609057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.001 qpair failed and we were unable to recover it. 00:30:39.001 [2024-07-16 00:56:56.619135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.001 [2024-07-16 00:56:56.619284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.001 [2024-07-16 00:56:56.619315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.001 [2024-07-16 00:56:56.619328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.001 [2024-07-16 00:56:56.619340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.001 [2024-07-16 00:56:56.619368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.001 qpair failed and we were unable to recover it. 
00:30:39.001 [2024-07-16 00:56:56.628984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.001 [2024-07-16 00:56:56.629099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.001 [2024-07-16 00:56:56.629123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.001 [2024-07-16 00:56:56.629136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.001 [2024-07-16 00:56:56.629148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.001 [2024-07-16 00:56:56.629173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.001 qpair failed and we were unable to recover it. 00:30:39.001 [2024-07-16 00:56:56.639005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.001 [2024-07-16 00:56:56.639134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.001 [2024-07-16 00:56:56.639160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.001 [2024-07-16 00:56:56.639173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.001 [2024-07-16 00:56:56.639184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.001 [2024-07-16 00:56:56.639211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.001 qpair failed and we were unable to recover it. 00:30:39.001 [2024-07-16 00:56:56.649049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.001 [2024-07-16 00:56:56.649158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.001 [2024-07-16 00:56:56.649184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.001 [2024-07-16 00:56:56.649197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.001 [2024-07-16 00:56:56.649211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.001 [2024-07-16 00:56:56.649237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.001 qpair failed and we were unable to recover it. 
00:30:39.001 [2024-07-16 00:56:56.659276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.001 [2024-07-16 00:56:56.659462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.001 [2024-07-16 00:56:56.659488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.001 [2024-07-16 00:56:56.659501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.001 [2024-07-16 00:56:56.659518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.001 [2024-07-16 00:56:56.659547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.001 qpair failed and we were unable to recover it. 00:30:39.001 [2024-07-16 00:56:56.669185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.001 [2024-07-16 00:56:56.669336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.001 [2024-07-16 00:56:56.669362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.001 [2024-07-16 00:56:56.669375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.001 [2024-07-16 00:56:56.669388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.002 [2024-07-16 00:56:56.669414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.002 qpair failed and we were unable to recover it. 00:30:39.002 [2024-07-16 00:56:56.679158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.002 [2024-07-16 00:56:56.679300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.002 [2024-07-16 00:56:56.679333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.002 [2024-07-16 00:56:56.679346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.002 [2024-07-16 00:56:56.679358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.002 [2024-07-16 00:56:56.679384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.002 qpair failed and we were unable to recover it. 
00:30:39.002 [2024-07-16 00:56:56.689207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.002 [2024-07-16 00:56:56.689320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.002 [2024-07-16 00:56:56.689344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.002 [2024-07-16 00:56:56.689357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.002 [2024-07-16 00:56:56.689369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.002 [2024-07-16 00:56:56.689395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.002 qpair failed and we were unable to recover it. 00:30:39.002 [2024-07-16 00:56:56.699382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.002 [2024-07-16 00:56:56.699521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.002 [2024-07-16 00:56:56.699547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.002 [2024-07-16 00:56:56.699561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.002 [2024-07-16 00:56:56.699573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.002 [2024-07-16 00:56:56.699599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.002 qpair failed and we were unable to recover it. 00:30:39.002 [2024-07-16 00:56:56.709165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.002 [2024-07-16 00:56:56.709301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.002 [2024-07-16 00:56:56.709328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.002 [2024-07-16 00:56:56.709341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.002 [2024-07-16 00:56:56.709353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.002 [2024-07-16 00:56:56.709380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.002 qpair failed and we were unable to recover it. 
00:30:39.002 [2024-07-16 00:56:56.719264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.002 [2024-07-16 00:56:56.719366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.002 [2024-07-16 00:56:56.719390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.002 [2024-07-16 00:56:56.719402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.002 [2024-07-16 00:56:56.719414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.002 [2024-07-16 00:56:56.719440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.002 qpair failed and we were unable to recover it. 00:30:39.002 [2024-07-16 00:56:56.729296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.002 [2024-07-16 00:56:56.729406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.002 [2024-07-16 00:56:56.729431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.002 [2024-07-16 00:56:56.729444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.002 [2024-07-16 00:56:56.729456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.002 [2024-07-16 00:56:56.729482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.002 qpair failed and we were unable to recover it. 00:30:39.002 [2024-07-16 00:56:56.739526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.002 [2024-07-16 00:56:56.739663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.002 [2024-07-16 00:56:56.739689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.002 [2024-07-16 00:56:56.739701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.002 [2024-07-16 00:56:56.739713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.002 [2024-07-16 00:56:56.739738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.002 qpair failed and we were unable to recover it. 
00:30:39.002 [2024-07-16 00:56:56.749354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.002 [2024-07-16 00:56:56.749465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.002 [2024-07-16 00:56:56.749488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.002 [2024-07-16 00:56:56.749506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.002 [2024-07-16 00:56:56.749518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.002 [2024-07-16 00:56:56.749544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.002 qpair failed and we were unable to recover it. 00:30:39.002 [2024-07-16 00:56:56.759387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.002 [2024-07-16 00:56:56.759491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.002 [2024-07-16 00:56:56.759515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.002 [2024-07-16 00:56:56.759528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.002 [2024-07-16 00:56:56.759540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.002 [2024-07-16 00:56:56.759566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.002 qpair failed and we were unable to recover it. 00:30:39.002 [2024-07-16 00:56:56.769433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.002 [2024-07-16 00:56:56.769546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.002 [2024-07-16 00:56:56.769570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.002 [2024-07-16 00:56:56.769584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.002 [2024-07-16 00:56:56.769596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.002 [2024-07-16 00:56:56.769622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.002 qpair failed and we were unable to recover it. 
00:30:39.002 [2024-07-16 00:56:56.779675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.002 [2024-07-16 00:56:56.779825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.002 [2024-07-16 00:56:56.779851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.002 [2024-07-16 00:56:56.779864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.002 [2024-07-16 00:56:56.779876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.002 [2024-07-16 00:56:56.779902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.002 qpair failed and we were unable to recover it. 00:30:39.002 [2024-07-16 00:56:56.789480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.002 [2024-07-16 00:56:56.789598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.002 [2024-07-16 00:56:56.789622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.002 [2024-07-16 00:56:56.789635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.002 [2024-07-16 00:56:56.789647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.002 [2024-07-16 00:56:56.789673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.002 qpair failed and we were unable to recover it. 00:30:39.002 [2024-07-16 00:56:56.799534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.002 [2024-07-16 00:56:56.799663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.002 [2024-07-16 00:56:56.799688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.002 [2024-07-16 00:56:56.799701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.002 [2024-07-16 00:56:56.799713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.003 [2024-07-16 00:56:56.799739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.003 qpair failed and we were unable to recover it. 
00:30:39.003 [2024-07-16 00:56:56.809547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.003 [2024-07-16 00:56:56.809673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.003 [2024-07-16 00:56:56.809705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.003 [2024-07-16 00:56:56.809718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.003 [2024-07-16 00:56:56.809729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.003 [2024-07-16 00:56:56.809755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.003 qpair failed and we were unable to recover it. 00:30:39.003 [2024-07-16 00:56:56.819814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.003 [2024-07-16 00:56:56.819959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.003 [2024-07-16 00:56:56.819991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.003 [2024-07-16 00:56:56.820006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.003 [2024-07-16 00:56:56.820018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.003 [2024-07-16 00:56:56.820046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.003 qpair failed and we were unable to recover it. 00:30:39.003 [2024-07-16 00:56:56.829652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.003 [2024-07-16 00:56:56.829761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.003 [2024-07-16 00:56:56.829788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.003 [2024-07-16 00:56:56.829802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.003 [2024-07-16 00:56:56.829814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.003 [2024-07-16 00:56:56.829841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.003 qpair failed and we were unable to recover it. 
00:30:39.261 [2024-07-16 00:56:56.839652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.261 [2024-07-16 00:56:56.839764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.261 [2024-07-16 00:56:56.839789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.261 [2024-07-16 00:56:56.839814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.261 [2024-07-16 00:56:56.839825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.261 [2024-07-16 00:56:56.839853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.261 qpair failed and we were unable to recover it. 00:30:39.261 [2024-07-16 00:56:56.849695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.261 [2024-07-16 00:56:56.849802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.262 [2024-07-16 00:56:56.849827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.262 [2024-07-16 00:56:56.849840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.262 [2024-07-16 00:56:56.849852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.262 [2024-07-16 00:56:56.849879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-07-16 00:56:56.859854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.262 [2024-07-16 00:56:56.859982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.262 [2024-07-16 00:56:56.860009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.262 [2024-07-16 00:56:56.860022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.262 [2024-07-16 00:56:56.860034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.262 [2024-07-16 00:56:56.860061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.262 qpair failed and we were unable to recover it. 
00:30:39.262 [2024-07-16 00:56:56.869772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.262 [2024-07-16 00:56:56.869881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.262 [2024-07-16 00:56:56.869905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.262 [2024-07-16 00:56:56.869918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.262 [2024-07-16 00:56:56.869929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.262 [2024-07-16 00:56:56.869956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-07-16 00:56:56.879784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.262 [2024-07-16 00:56:56.879921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.262 [2024-07-16 00:56:56.879947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.262 [2024-07-16 00:56:56.879960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.262 [2024-07-16 00:56:56.879972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.262 [2024-07-16 00:56:56.879998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-07-16 00:56:56.889806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.262 [2024-07-16 00:56:56.889903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.262 [2024-07-16 00:56:56.889929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.262 [2024-07-16 00:56:56.889942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.262 [2024-07-16 00:56:56.889954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.262 [2024-07-16 00:56:56.889980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.262 qpair failed and we were unable to recover it. 
00:30:39.262 [2024-07-16 00:56:56.900063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.262 [2024-07-16 00:56:56.900198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.262 [2024-07-16 00:56:56.900225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.262 [2024-07-16 00:56:56.900237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.262 [2024-07-16 00:56:56.900249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.262 [2024-07-16 00:56:56.900282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-07-16 00:56:56.909907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.262 [2024-07-16 00:56:56.910020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.262 [2024-07-16 00:56:56.910044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.262 [2024-07-16 00:56:56.910058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.262 [2024-07-16 00:56:56.910069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.262 [2024-07-16 00:56:56.910096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-07-16 00:56:56.919941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.262 [2024-07-16 00:56:56.920048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.262 [2024-07-16 00:56:56.920071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.262 [2024-07-16 00:56:56.920084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.262 [2024-07-16 00:56:56.920096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.262 [2024-07-16 00:56:56.920122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.262 qpair failed and we were unable to recover it. 
00:30:39.262 [2024-07-16 00:56:56.929984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.262 [2024-07-16 00:56:56.930087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.262 [2024-07-16 00:56:56.930116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.262 [2024-07-16 00:56:56.930129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.262 [2024-07-16 00:56:56.930141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.262 [2024-07-16 00:56:56.930167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-07-16 00:56:56.940203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.262 [2024-07-16 00:56:56.940346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.262 [2024-07-16 00:56:56.940373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.262 [2024-07-16 00:56:56.940386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.262 [2024-07-16 00:56:56.940397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.262 [2024-07-16 00:56:56.940423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.262 qpair failed and we were unable to recover it. 00:30:39.262 [2024-07-16 00:56:56.950014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.262 [2024-07-16 00:56:56.950132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.262 [2024-07-16 00:56:56.950156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.262 [2024-07-16 00:56:56.950169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.262 [2024-07-16 00:56:56.950181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.262 [2024-07-16 00:56:56.950207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.262 qpair failed and we were unable to recover it. 
00:30:39.262 [2024-07-16 00:56:56.960070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.262 [2024-07-16 00:56:56.960177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.262 [2024-07-16 00:56:56.960200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.262 [2024-07-16 00:56:56.960213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.263 [2024-07-16 00:56:56.960224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.263 [2024-07-16 00:56:56.960250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-07-16 00:56:56.970090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.263 [2024-07-16 00:56:56.970198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.263 [2024-07-16 00:56:56.970222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.263 [2024-07-16 00:56:56.970234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.263 [2024-07-16 00:56:56.970245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.263 [2024-07-16 00:56:56.970284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-07-16 00:56:56.980345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.263 [2024-07-16 00:56:56.980480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.263 [2024-07-16 00:56:56.980506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.263 [2024-07-16 00:56:56.980519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.263 [2024-07-16 00:56:56.980531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.263 [2024-07-16 00:56:56.980557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.263 qpair failed and we were unable to recover it. 
00:30:39.263 [2024-07-16 00:56:56.990112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.263 [2024-07-16 00:56:56.990238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.263 [2024-07-16 00:56:56.990269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.263 [2024-07-16 00:56:56.990283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.263 [2024-07-16 00:56:56.990295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.263 [2024-07-16 00:56:56.990321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-07-16 00:56:57.000186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.263 [2024-07-16 00:56:57.000303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.263 [2024-07-16 00:56:57.000328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.263 [2024-07-16 00:56:57.000341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.263 [2024-07-16 00:56:57.000353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.263 [2024-07-16 00:56:57.000380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-07-16 00:56:57.010228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.263 [2024-07-16 00:56:57.010360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.263 [2024-07-16 00:56:57.010386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.263 [2024-07-16 00:56:57.010399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.263 [2024-07-16 00:56:57.010411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.263 [2024-07-16 00:56:57.010438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.263 qpair failed and we were unable to recover it. 
00:30:39.263 [2024-07-16 00:56:57.020460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.263 [2024-07-16 00:56:57.020592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.263 [2024-07-16 00:56:57.020622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.263 [2024-07-16 00:56:57.020636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.263 [2024-07-16 00:56:57.020647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.263 [2024-07-16 00:56:57.020673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-07-16 00:56:57.030292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.263 [2024-07-16 00:56:57.030421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.263 [2024-07-16 00:56:57.030447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.263 [2024-07-16 00:56:57.030460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.263 [2024-07-16 00:56:57.030472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.263 [2024-07-16 00:56:57.030498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-07-16 00:56:57.040329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.263 [2024-07-16 00:56:57.040442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.263 [2024-07-16 00:56:57.040466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.263 [2024-07-16 00:56:57.040479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.263 [2024-07-16 00:56:57.040490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.263 [2024-07-16 00:56:57.040516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.263 qpair failed and we were unable to recover it. 
00:30:39.263 [2024-07-16 00:56:57.050290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.263 [2024-07-16 00:56:57.050422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.263 [2024-07-16 00:56:57.050448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.263 [2024-07-16 00:56:57.050461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.263 [2024-07-16 00:56:57.050472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.263 [2024-07-16 00:56:57.050499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-07-16 00:56:57.060585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.263 [2024-07-16 00:56:57.060722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.263 [2024-07-16 00:56:57.060749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.263 [2024-07-16 00:56:57.060761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.263 [2024-07-16 00:56:57.060778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.263 [2024-07-16 00:56:57.060804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.263 [2024-07-16 00:56:57.070361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.263 [2024-07-16 00:56:57.070467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.263 [2024-07-16 00:56:57.070492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.263 [2024-07-16 00:56:57.070505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.263 [2024-07-16 00:56:57.070517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.263 [2024-07-16 00:56:57.070544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.263 qpair failed and we were unable to recover it. 
00:30:39.263 [2024-07-16 00:56:57.080454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.263 [2024-07-16 00:56:57.080556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.263 [2024-07-16 00:56:57.080581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.263 [2024-07-16 00:56:57.080593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.263 [2024-07-16 00:56:57.080605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.263 [2024-07-16 00:56:57.080631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.263 qpair failed and we were unable to recover it. 00:30:39.264 [2024-07-16 00:56:57.090471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.264 [2024-07-16 00:56:57.090580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.264 [2024-07-16 00:56:57.090605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.264 [2024-07-16 00:56:57.090618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.264 [2024-07-16 00:56:57.090629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.264 [2024-07-16 00:56:57.090656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.264 qpair failed and we were unable to recover it. 00:30:39.523 [2024-07-16 00:56:57.100651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.523 [2024-07-16 00:56:57.100782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.523 [2024-07-16 00:56:57.100809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.523 [2024-07-16 00:56:57.100821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.523 [2024-07-16 00:56:57.100834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.523 [2024-07-16 00:56:57.100860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.523 qpair failed and we were unable to recover it. 
00:30:39.523 [2024-07-16 00:56:57.110484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.523 [2024-07-16 00:56:57.110610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.523 [2024-07-16 00:56:57.110635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.523 [2024-07-16 00:56:57.110648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.523 [2024-07-16 00:56:57.110661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.523 [2024-07-16 00:56:57.110687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.523 qpair failed and we were unable to recover it. 00:30:39.523 [2024-07-16 00:56:57.120588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.523 [2024-07-16 00:56:57.120701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.523 [2024-07-16 00:56:57.120725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.523 [2024-07-16 00:56:57.120738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.523 [2024-07-16 00:56:57.120750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.523 [2024-07-16 00:56:57.120777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.523 qpair failed and we were unable to recover it. 00:30:39.523 [2024-07-16 00:56:57.130594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.523 [2024-07-16 00:56:57.130696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.523 [2024-07-16 00:56:57.130720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.523 [2024-07-16 00:56:57.130732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.523 [2024-07-16 00:56:57.130744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.523 [2024-07-16 00:56:57.130769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.523 qpair failed and we were unable to recover it. 
00:30:39.523 [2024-07-16 00:56:57.140842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.523 [2024-07-16 00:56:57.141003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.524 [2024-07-16 00:56:57.141028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.524 [2024-07-16 00:56:57.141041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.524 [2024-07-16 00:56:57.141053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.524 [2024-07-16 00:56:57.141079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.524 qpair failed and we were unable to recover it. 00:30:39.524 [2024-07-16 00:56:57.150680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.524 [2024-07-16 00:56:57.150796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.524 [2024-07-16 00:56:57.150820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.524 [2024-07-16 00:56:57.150832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.524 [2024-07-16 00:56:57.150849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.524 [2024-07-16 00:56:57.150876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.524 qpair failed and we were unable to recover it. 00:30:39.524 [2024-07-16 00:56:57.160706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.524 [2024-07-16 00:56:57.160812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.524 [2024-07-16 00:56:57.160836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.524 [2024-07-16 00:56:57.160848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.524 [2024-07-16 00:56:57.160862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.524 [2024-07-16 00:56:57.160888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.524 qpair failed and we were unable to recover it. 
00:30:39.524 [2024-07-16 00:56:57.170687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.524 [2024-07-16 00:56:57.170826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.524 [2024-07-16 00:56:57.170851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.524 [2024-07-16 00:56:57.170864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.524 [2024-07-16 00:56:57.170875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.524 [2024-07-16 00:56:57.170901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.524 qpair failed and we were unable to recover it. 00:30:39.524 [2024-07-16 00:56:57.180976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.524 [2024-07-16 00:56:57.181115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.524 [2024-07-16 00:56:57.181140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.524 [2024-07-16 00:56:57.181152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.524 [2024-07-16 00:56:57.181164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.524 [2024-07-16 00:56:57.181190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.524 qpair failed and we were unable to recover it. 00:30:39.524 [2024-07-16 00:56:57.190748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.524 [2024-07-16 00:56:57.190854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.524 [2024-07-16 00:56:57.190878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.524 [2024-07-16 00:56:57.190891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.524 [2024-07-16 00:56:57.190902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.524 [2024-07-16 00:56:57.190928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.524 qpair failed and we were unable to recover it. 
00:30:39.524 [2024-07-16 00:56:57.200837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.524 [2024-07-16 00:56:57.200966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.524 [2024-07-16 00:56:57.200992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.524 [2024-07-16 00:56:57.201005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.524 [2024-07-16 00:56:57.201017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.524 [2024-07-16 00:56:57.201042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.524 qpair failed and we were unable to recover it. 00:30:39.524 [2024-07-16 00:56:57.210884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.524 [2024-07-16 00:56:57.210987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.524 [2024-07-16 00:56:57.211012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.524 [2024-07-16 00:56:57.211024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.524 [2024-07-16 00:56:57.211036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.524 [2024-07-16 00:56:57.211063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.524 qpair failed and we were unable to recover it. 00:30:39.524 [2024-07-16 00:56:57.221124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.524 [2024-07-16 00:56:57.221275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.524 [2024-07-16 00:56:57.221301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.524 [2024-07-16 00:56:57.221315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.524 [2024-07-16 00:56:57.221327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.524 [2024-07-16 00:56:57.221354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.524 qpair failed and we were unable to recover it. 
00:30:39.524 [2024-07-16 00:56:57.230938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.524 [2024-07-16 00:56:57.231051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.524 [2024-07-16 00:56:57.231075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.524 [2024-07-16 00:56:57.231089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.524 [2024-07-16 00:56:57.231101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.524 [2024-07-16 00:56:57.231128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.524 qpair failed and we were unable to recover it. 00:30:39.524 [2024-07-16 00:56:57.241006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.524 [2024-07-16 00:56:57.241121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.524 [2024-07-16 00:56:57.241145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.524 [2024-07-16 00:56:57.241163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.524 [2024-07-16 00:56:57.241176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.524 [2024-07-16 00:56:57.241201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.524 qpair failed and we were unable to recover it. 00:30:39.524 [2024-07-16 00:56:57.251063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.524 [2024-07-16 00:56:57.251176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.524 [2024-07-16 00:56:57.251202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.524 [2024-07-16 00:56:57.251215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.524 [2024-07-16 00:56:57.251227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.524 [2024-07-16 00:56:57.251262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.524 qpair failed and we were unable to recover it. 
00:30:39.524 [2024-07-16 00:56:57.261233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.525 [2024-07-16 00:56:57.261387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.525 [2024-07-16 00:56:57.261414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.525 [2024-07-16 00:56:57.261427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.525 [2024-07-16 00:56:57.261438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.525 [2024-07-16 00:56:57.261465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.525 qpair failed and we were unable to recover it. 00:30:39.525 [2024-07-16 00:56:57.271105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.525 [2024-07-16 00:56:57.271240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.525 [2024-07-16 00:56:57.271274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.525 [2024-07-16 00:56:57.271287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.525 [2024-07-16 00:56:57.271300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.525 [2024-07-16 00:56:57.271327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.525 qpair failed and we were unable to recover it. 00:30:39.525 [2024-07-16 00:56:57.281114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.525 [2024-07-16 00:56:57.281219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.525 [2024-07-16 00:56:57.281244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.525 [2024-07-16 00:56:57.281265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.525 [2024-07-16 00:56:57.281278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.525 [2024-07-16 00:56:57.281304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.525 qpair failed and we were unable to recover it. 
00:30:39.525 [2024-07-16 00:56:57.291149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.525 [2024-07-16 00:56:57.291251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.525 [2024-07-16 00:56:57.291284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.525 [2024-07-16 00:56:57.291297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.525 [2024-07-16 00:56:57.291309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.525 [2024-07-16 00:56:57.291336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.525 qpair failed and we were unable to recover it. 00:30:39.525 [2024-07-16 00:56:57.301384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.525 [2024-07-16 00:56:57.301516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.525 [2024-07-16 00:56:57.301541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.525 [2024-07-16 00:56:57.301554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.525 [2024-07-16 00:56:57.301566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.525 [2024-07-16 00:56:57.301591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.525 qpair failed and we were unable to recover it. 00:30:39.525 [2024-07-16 00:56:57.311238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.525 [2024-07-16 00:56:57.311376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.525 [2024-07-16 00:56:57.311402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.525 [2024-07-16 00:56:57.311416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.525 [2024-07-16 00:56:57.311428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.525 [2024-07-16 00:56:57.311455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.525 qpair failed and we were unable to recover it. 
00:30:39.525 [2024-07-16 00:56:57.321220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.525 [2024-07-16 00:56:57.321338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.525 [2024-07-16 00:56:57.321365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.525 [2024-07-16 00:56:57.321377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.525 [2024-07-16 00:56:57.321389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.525 [2024-07-16 00:56:57.321414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.525 qpair failed and we were unable to recover it. 00:30:39.525 [2024-07-16 00:56:57.331272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.525 [2024-07-16 00:56:57.331387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.525 [2024-07-16 00:56:57.331418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.525 [2024-07-16 00:56:57.331432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.525 [2024-07-16 00:56:57.331443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.525 [2024-07-16 00:56:57.331470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.525 qpair failed and we were unable to recover it. 00:30:39.525 [2024-07-16 00:56:57.341513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.525 [2024-07-16 00:56:57.341668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.525 [2024-07-16 00:56:57.341692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.525 [2024-07-16 00:56:57.341706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.525 [2024-07-16 00:56:57.341717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.525 [2024-07-16 00:56:57.341744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.525 qpair failed and we were unable to recover it. 
00:30:39.525 [2024-07-16 00:56:57.351277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.525 [2024-07-16 00:56:57.351387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.525 [2024-07-16 00:56:57.351411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.525 [2024-07-16 00:56:57.351424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.525 [2024-07-16 00:56:57.351436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.525 [2024-07-16 00:56:57.351462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.525 qpair failed and we were unable to recover it. 00:30:39.785 [2024-07-16 00:56:57.361411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.785 [2024-07-16 00:56:57.361546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.785 [2024-07-16 00:56:57.361573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.361586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.361598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.361626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 00:30:39.786 [2024-07-16 00:56:57.371362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.371469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.371494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.371507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.371519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.371551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 
00:30:39.786 [2024-07-16 00:56:57.381685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.381828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.381854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.381867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.381879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.381905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 00:30:39.786 [2024-07-16 00:56:57.391440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.391550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.391576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.391588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.391600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.391626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 00:30:39.786 [2024-07-16 00:56:57.401433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.401563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.401589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.401602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.401614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.401640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 
00:30:39.786 [2024-07-16 00:56:57.411585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.411695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.411719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.411732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.411744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.411770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 00:30:39.786 [2024-07-16 00:56:57.421753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.421890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.421920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.421933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.421945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.421971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 00:30:39.786 [2024-07-16 00:56:57.431571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.431685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.431709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.431723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.431734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.431760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 
00:30:39.786 [2024-07-16 00:56:57.441629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.441735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.441759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.441773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.441785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.441811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 00:30:39.786 [2024-07-16 00:56:57.451707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.451836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.451863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.451875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.451887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.451913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 00:30:39.786 [2024-07-16 00:56:57.461945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.462085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.462110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.462123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.462135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.462165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 
00:30:39.786 [2024-07-16 00:56:57.471683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.471827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.471852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.471865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.471877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.471904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 00:30:39.786 [2024-07-16 00:56:57.481696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.481825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.481851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.481865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.481877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.481905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 00:30:39.786 [2024-07-16 00:56:57.491766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.491897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.491922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.491936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.491948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.491975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 
00:30:39.786 [2024-07-16 00:56:57.502041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.502206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.502233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.502246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.502266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.502293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 00:30:39.786 [2024-07-16 00:56:57.511801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.511949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.511973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.511987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.511999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.512025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 00:30:39.786 [2024-07-16 00:56:57.521885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.521987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.522011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.522024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.522036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.522062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 
00:30:39.786 [2024-07-16 00:56:57.531864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.786 [2024-07-16 00:56:57.531969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.786 [2024-07-16 00:56:57.531993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.786 [2024-07-16 00:56:57.532006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.786 [2024-07-16 00:56:57.532018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.786 [2024-07-16 00:56:57.532043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.786 qpair failed and we were unable to recover it. 00:30:39.786 [2024-07-16 00:56:57.542157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.787 [2024-07-16 00:56:57.542305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.787 [2024-07-16 00:56:57.542332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.787 [2024-07-16 00:56:57.542345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.787 [2024-07-16 00:56:57.542357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.787 [2024-07-16 00:56:57.542383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.787 qpair failed and we were unable to recover it. 00:30:39.787 [2024-07-16 00:56:57.551922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.787 [2024-07-16 00:56:57.552053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.787 [2024-07-16 00:56:57.552078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.787 [2024-07-16 00:56:57.552091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.787 [2024-07-16 00:56:57.552108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.787 [2024-07-16 00:56:57.552134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.787 qpair failed and we were unable to recover it. 
00:30:39.787 [2024-07-16 00:56:57.562008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.787 [2024-07-16 00:56:57.562122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.787 [2024-07-16 00:56:57.562147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.787 [2024-07-16 00:56:57.562160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.787 [2024-07-16 00:56:57.562172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.787 [2024-07-16 00:56:57.562197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.787 qpair failed and we were unable to recover it. 00:30:39.787 [2024-07-16 00:56:57.572028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.787 [2024-07-16 00:56:57.572177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.787 [2024-07-16 00:56:57.572204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.787 [2024-07-16 00:56:57.572216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.787 [2024-07-16 00:56:57.572228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.787 [2024-07-16 00:56:57.572264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.787 qpair failed and we were unable to recover it. 00:30:39.787 [2024-07-16 00:56:57.582185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.787 [2024-07-16 00:56:57.582359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.787 [2024-07-16 00:56:57.582385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.787 [2024-07-16 00:56:57.582399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.787 [2024-07-16 00:56:57.582410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.787 [2024-07-16 00:56:57.582437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.787 qpair failed and we were unable to recover it. 
00:30:39.787 [2024-07-16 00:56:57.592133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.787 [2024-07-16 00:56:57.592275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.787 [2024-07-16 00:56:57.592301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.787 [2024-07-16 00:56:57.592314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.787 [2024-07-16 00:56:57.592326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efd44000b90 00:30:39.787 [2024-07-16 00:56:57.592353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.787 qpair failed and we were unable to recover it. 00:30:39.787 [2024-07-16 00:56:57.602158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.787 [2024-07-16 00:56:57.602316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.787 [2024-07-16 00:56:57.602377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.787 [2024-07-16 00:56:57.602403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.787 [2024-07-16 00:56:57.602425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:39.787 [2024-07-16 00:56:57.602474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.787 qpair failed and we were unable to recover it. 00:30:39.787 [2024-07-16 00:56:57.612175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.787 [2024-07-16 00:56:57.612301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.787 [2024-07-16 00:56:57.612333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.787 [2024-07-16 00:56:57.612349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.787 [2024-07-16 00:56:57.612363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:39.787 [2024-07-16 00:56:57.612392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.787 qpair failed and we were unable to recover it. 
00:30:39.787 [2024-07-16 00:56:57.622404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:39.787 [2024-07-16 00:56:57.622535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:39.787 [2024-07-16 00:56:57.622559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:39.787 [2024-07-16 00:56:57.622569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:39.787 [2024-07-16 00:56:57.622579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:39.787 [2024-07-16 00:56:57.622599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.787 qpair failed and we were unable to recover it. 00:30:40.047 [2024-07-16 00:56:57.632159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.047 [2024-07-16 00:56:57.632263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.047 [2024-07-16 00:56:57.632285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.047 [2024-07-16 00:56:57.632296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.047 [2024-07-16 00:56:57.632306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.047 [2024-07-16 00:56:57.632326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.047 qpair failed and we were unable to recover it. 00:30:40.047 [2024-07-16 00:56:57.642265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.047 [2024-07-16 00:56:57.642356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.047 [2024-07-16 00:56:57.642378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.047 [2024-07-16 00:56:57.642392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.047 [2024-07-16 00:56:57.642401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.047 [2024-07-16 00:56:57.642421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.047 qpair failed and we were unable to recover it. 
00:30:40.047 [2024-07-16 00:56:57.652307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.047 [2024-07-16 00:56:57.652405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.047 [2024-07-16 00:56:57.652426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.047 [2024-07-16 00:56:57.652437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.047 [2024-07-16 00:56:57.652446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.047 [2024-07-16 00:56:57.652466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.047 qpair failed and we were unable to recover it. 00:30:40.047 [2024-07-16 00:56:57.662567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.047 [2024-07-16 00:56:57.662730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.047 [2024-07-16 00:56:57.662753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.047 [2024-07-16 00:56:57.662763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.047 [2024-07-16 00:56:57.662773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.047 [2024-07-16 00:56:57.662793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.047 qpair failed and we were unable to recover it. 00:30:40.047 [2024-07-16 00:56:57.672373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.047 [2024-07-16 00:56:57.672471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.047 [2024-07-16 00:56:57.672491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.047 [2024-07-16 00:56:57.672502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.047 [2024-07-16 00:56:57.672510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.047 [2024-07-16 00:56:57.672530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.047 qpair failed and we were unable to recover it. 
00:30:40.047 [2024-07-16 00:56:57.682388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.047 [2024-07-16 00:56:57.682495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.047 [2024-07-16 00:56:57.682518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.047 [2024-07-16 00:56:57.682529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.047 [2024-07-16 00:56:57.682538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.047 [2024-07-16 00:56:57.682558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.047 qpair failed and we were unable to recover it. 00:30:40.047 [2024-07-16 00:56:57.692417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.047 [2024-07-16 00:56:57.692551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.047 [2024-07-16 00:56:57.692573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.047 [2024-07-16 00:56:57.692583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.047 [2024-07-16 00:56:57.692592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.047 [2024-07-16 00:56:57.692612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.047 qpair failed and we were unable to recover it. 00:30:40.047 [2024-07-16 00:56:57.702695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.702825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.702846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.702856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.702866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.702885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 
00:30:40.048 [2024-07-16 00:56:57.712524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.712625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.712645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.712656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.712665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.712685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 00:30:40.048 [2024-07-16 00:56:57.722533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.722635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.722655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.722665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.722675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.722695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 00:30:40.048 [2024-07-16 00:56:57.732538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.732635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.732656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.732671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.732680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.732700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 
00:30:40.048 [2024-07-16 00:56:57.742785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.742938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.742961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.742972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.742981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.743001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 00:30:40.048 [2024-07-16 00:56:57.752559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.752664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.752686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.752698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.752707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.752728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 00:30:40.048 [2024-07-16 00:56:57.762592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.762682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.762703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.762714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.762723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.762742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 
00:30:40.048 [2024-07-16 00:56:57.772723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.772819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.772840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.772849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.772858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.772878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 00:30:40.048 [2024-07-16 00:56:57.782857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.782982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.783002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.783012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.783021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.783042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 00:30:40.048 [2024-07-16 00:56:57.792744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.792850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.792870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.792881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.792891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.792911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 
00:30:40.048 [2024-07-16 00:56:57.802750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.802870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.802892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.802902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.802911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.802931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 00:30:40.048 [2024-07-16 00:56:57.812794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.812894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.812914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.812925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.812933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.812953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 00:30:40.048 [2024-07-16 00:56:57.822980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.823104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.823123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.823139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.823148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.823167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 
00:30:40.048 [2024-07-16 00:56:57.832869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.832966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.832986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.832997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.833005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.048 [2024-07-16 00:56:57.833024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.048 qpair failed and we were unable to recover it. 00:30:40.048 [2024-07-16 00:56:57.842910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.048 [2024-07-16 00:56:57.843013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.048 [2024-07-16 00:56:57.843033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.048 [2024-07-16 00:56:57.843043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.048 [2024-07-16 00:56:57.843053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.049 [2024-07-16 00:56:57.843073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.049 qpair failed and we were unable to recover it. 00:30:40.049 [2024-07-16 00:56:57.852927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.049 [2024-07-16 00:56:57.853020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.049 [2024-07-16 00:56:57.853040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.049 [2024-07-16 00:56:57.853050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.049 [2024-07-16 00:56:57.853059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.049 [2024-07-16 00:56:57.853079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.049 qpair failed and we were unable to recover it. 
00:30:40.049 [2024-07-16 00:56:57.863173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.049 [2024-07-16 00:56:57.863314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.049 [2024-07-16 00:56:57.863336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.049 [2024-07-16 00:56:57.863347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.049 [2024-07-16 00:56:57.863356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.049 [2024-07-16 00:56:57.863377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.049 qpair failed and we were unable to recover it. 00:30:40.049 [2024-07-16 00:56:57.872993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.049 [2024-07-16 00:56:57.873088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.049 [2024-07-16 00:56:57.873109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.049 [2024-07-16 00:56:57.873119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.049 [2024-07-16 00:56:57.873129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.049 [2024-07-16 00:56:57.873148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.049 qpair failed and we were unable to recover it. 00:30:40.049 [2024-07-16 00:56:57.882990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.049 [2024-07-16 00:56:57.883085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.049 [2024-07-16 00:56:57.883106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.049 [2024-07-16 00:56:57.883116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.049 [2024-07-16 00:56:57.883125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.049 [2024-07-16 00:56:57.883144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.049 qpair failed and we were unable to recover it. 
00:30:40.309 [2024-07-16 00:56:57.893062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.309 [2024-07-16 00:56:57.893162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.309 [2024-07-16 00:56:57.893182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.309 [2024-07-16 00:56:57.893192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.309 [2024-07-16 00:56:57.893201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.309 [2024-07-16 00:56:57.893220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.309 qpair failed and we were unable to recover it. 00:30:40.309 [2024-07-16 00:56:57.903318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.309 [2024-07-16 00:56:57.903441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.309 [2024-07-16 00:56:57.903461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.309 [2024-07-16 00:56:57.903472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.309 [2024-07-16 00:56:57.903481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.309 [2024-07-16 00:56:57.903501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.309 qpair failed and we were unable to recover it. 00:30:40.309 [2024-07-16 00:56:57.913106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.309 [2024-07-16 00:56:57.913212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.309 [2024-07-16 00:56:57.913235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.309 [2024-07-16 00:56:57.913246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.309 [2024-07-16 00:56:57.913260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.309 [2024-07-16 00:56:57.913281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.309 qpair failed and we were unable to recover it. 
00:30:40.309 [2024-07-16 00:56:57.923144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.309 [2024-07-16 00:56:57.923237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.309 [2024-07-16 00:56:57.923263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.309 [2024-07-16 00:56:57.923274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.309 [2024-07-16 00:56:57.923283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.309 [2024-07-16 00:56:57.923302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.309 qpair failed and we were unable to recover it. 00:30:40.309 [2024-07-16 00:56:57.933199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.309 [2024-07-16 00:56:57.933298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.309 [2024-07-16 00:56:57.933317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.309 [2024-07-16 00:56:57.933328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.309 [2024-07-16 00:56:57.933337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.309 [2024-07-16 00:56:57.933356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.309 qpair failed and we were unable to recover it. 00:30:40.309 [2024-07-16 00:56:57.943434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.309 [2024-07-16 00:56:57.943560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.309 [2024-07-16 00:56:57.943579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.309 [2024-07-16 00:56:57.943591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.309 [2024-07-16 00:56:57.943600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.309 [2024-07-16 00:56:57.943620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.309 qpair failed and we were unable to recover it. 
00:30:40.309 [2024-07-16 00:56:57.953251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:57.953360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:57.953381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:57.953391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:57.953400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:57.953419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 00:30:40.310 [2024-07-16 00:56:57.963289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:57.963392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:57.963412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:57.963422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:57.963431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:57.963450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 00:30:40.310 [2024-07-16 00:56:57.973339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:57.973432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:57.973452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:57.973462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:57.973471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:57.973491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 
00:30:40.310 [2024-07-16 00:56:57.983557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:57.983724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:57.983745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:57.983756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:57.983766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:57.983786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 00:30:40.310 [2024-07-16 00:56:57.993398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:57.993542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:57.993563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:57.993573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:57.993583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:57.993604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 00:30:40.310 [2024-07-16 00:56:58.003418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:58.003513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:58.003537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:58.003547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:58.003557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:58.003576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 
00:30:40.310 [2024-07-16 00:56:58.013453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:58.013560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:58.013580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:58.013591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:58.013601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:58.013620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 00:30:40.310 [2024-07-16 00:56:58.023734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:58.023862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:58.023884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:58.023895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:58.023905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:58.023926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 00:30:40.310 [2024-07-16 00:56:58.033514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:58.033620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:58.033640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:58.033650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:58.033659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:58.033678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 
00:30:40.310 [2024-07-16 00:56:58.043537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:58.043647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:58.043666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:58.043677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:58.043686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:58.043710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 00:30:40.310 [2024-07-16 00:56:58.053562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:58.053656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:58.053677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:58.053688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:58.053697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:58.053718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 00:30:40.310 [2024-07-16 00:56:58.063811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:58.063991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:58.064012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:58.064023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:58.064032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:58.064052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 
00:30:40.310 [2024-07-16 00:56:58.073635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:58.073742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:58.073762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:58.073772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:58.073783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:58.073803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 00:30:40.310 [2024-07-16 00:56:58.083669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:58.083769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:58.083789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.310 [2024-07-16 00:56:58.083799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.310 [2024-07-16 00:56:58.083808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.310 [2024-07-16 00:56:58.083828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.310 qpair failed and we were unable to recover it. 00:30:40.310 [2024-07-16 00:56:58.093621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.310 [2024-07-16 00:56:58.093744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.310 [2024-07-16 00:56:58.093770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.311 [2024-07-16 00:56:58.093780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.311 [2024-07-16 00:56:58.093790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.311 [2024-07-16 00:56:58.093811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.311 qpair failed and we were unable to recover it. 
00:30:40.311 [2024-07-16 00:56:58.103960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.311 [2024-07-16 00:56:58.104092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.311 [2024-07-16 00:56:58.104115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.311 [2024-07-16 00:56:58.104125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.311 [2024-07-16 00:56:58.104135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.311 [2024-07-16 00:56:58.104155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.311 qpair failed and we were unable to recover it. 00:30:40.311 [2024-07-16 00:56:58.113779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.311 [2024-07-16 00:56:58.113886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.311 [2024-07-16 00:56:58.113906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.311 [2024-07-16 00:56:58.113917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.311 [2024-07-16 00:56:58.113926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.311 [2024-07-16 00:56:58.113945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.311 qpair failed and we were unable to recover it. 00:30:40.311 [2024-07-16 00:56:58.123790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.311 [2024-07-16 00:56:58.123914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.311 [2024-07-16 00:56:58.123935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.311 [2024-07-16 00:56:58.123946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.311 [2024-07-16 00:56:58.123956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.311 [2024-07-16 00:56:58.123977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.311 qpair failed and we were unable to recover it. 
00:30:40.311 [2024-07-16 00:56:58.133882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.311 [2024-07-16 00:56:58.133975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.311 [2024-07-16 00:56:58.133995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.311 [2024-07-16 00:56:58.134005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.311 [2024-07-16 00:56:58.134013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.311 [2024-07-16 00:56:58.134036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.311 qpair failed and we were unable to recover it. 00:30:40.311 [2024-07-16 00:56:58.144123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.311 [2024-07-16 00:56:58.144260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.311 [2024-07-16 00:56:58.144281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.311 [2024-07-16 00:56:58.144291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.311 [2024-07-16 00:56:58.144301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.311 [2024-07-16 00:56:58.144321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.311 qpair failed and we were unable to recover it. 00:30:40.572 [2024-07-16 00:56:58.153855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.572 [2024-07-16 00:56:58.153962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.572 [2024-07-16 00:56:58.153982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.572 [2024-07-16 00:56:58.153992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.572 [2024-07-16 00:56:58.154001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.572 [2024-07-16 00:56:58.154021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.572 qpair failed and we were unable to recover it. 
00:30:40.572 [2024-07-16 00:56:58.163929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.572 [2024-07-16 00:56:58.164086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.572 [2024-07-16 00:56:58.164107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.572 [2024-07-16 00:56:58.164117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.572 [2024-07-16 00:56:58.164126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.572 [2024-07-16 00:56:58.164147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.572 qpair failed and we were unable to recover it. 00:30:40.572 [2024-07-16 00:56:58.173937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.572 [2024-07-16 00:56:58.174033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.572 [2024-07-16 00:56:58.174053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.572 [2024-07-16 00:56:58.174064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.572 [2024-07-16 00:56:58.174072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.572 [2024-07-16 00:56:58.174092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.572 qpair failed and we were unable to recover it. 00:30:40.572 [2024-07-16 00:56:58.184275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.572 [2024-07-16 00:56:58.184393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.572 [2024-07-16 00:56:58.184417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.572 [2024-07-16 00:56:58.184428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.572 [2024-07-16 00:56:58.184437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.572 [2024-07-16 00:56:58.184458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.572 qpair failed and we were unable to recover it. 
00:30:40.572 [2024-07-16 00:56:58.194068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.572 [2024-07-16 00:56:58.194170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.572 [2024-07-16 00:56:58.194191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.572 [2024-07-16 00:56:58.194201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.572 [2024-07-16 00:56:58.194210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.572 [2024-07-16 00:56:58.194230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.572 qpair failed and we were unable to recover it. 00:30:40.572 [2024-07-16 00:56:58.204008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.572 [2024-07-16 00:56:58.204112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.572 [2024-07-16 00:56:58.204131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.572 [2024-07-16 00:56:58.204142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.572 [2024-07-16 00:56:58.204150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.572 [2024-07-16 00:56:58.204170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.572 qpair failed and we were unable to recover it. 00:30:40.572 [2024-07-16 00:56:58.214098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.572 [2024-07-16 00:56:58.214199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.572 [2024-07-16 00:56:58.214222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.572 [2024-07-16 00:56:58.214233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.572 [2024-07-16 00:56:58.214242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.572 [2024-07-16 00:56:58.214271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.572 qpair failed and we were unable to recover it. 
00:30:40.572 [2024-07-16 00:56:58.224345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.572 [2024-07-16 00:56:58.224498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.572 [2024-07-16 00:56:58.224521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.572 [2024-07-16 00:56:58.224531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.572 [2024-07-16 00:56:58.224540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.572 [2024-07-16 00:56:58.224567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.572 qpair failed and we were unable to recover it. 00:30:40.572 [2024-07-16 00:56:58.234200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.572 [2024-07-16 00:56:58.234340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.572 [2024-07-16 00:56:58.234362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.572 [2024-07-16 00:56:58.234372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.572 [2024-07-16 00:56:58.234381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.572 [2024-07-16 00:56:58.234401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.572 qpair failed and we were unable to recover it. 00:30:40.572 [2024-07-16 00:56:58.244209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.572 [2024-07-16 00:56:58.244313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.572 [2024-07-16 00:56:58.244334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.572 [2024-07-16 00:56:58.244344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.572 [2024-07-16 00:56:58.244353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.572 [2024-07-16 00:56:58.244373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.572 qpair failed and we were unable to recover it. 
00:30:40.573 [2024-07-16 00:56:58.254221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.254333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.254353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.254363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.254372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.573 [2024-07-16 00:56:58.254392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.573 qpair failed and we were unable to recover it. 00:30:40.573 [2024-07-16 00:56:58.264440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.264562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.264582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.264593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.264602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.573 [2024-07-16 00:56:58.264623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.573 qpair failed and we were unable to recover it. 00:30:40.573 [2024-07-16 00:56:58.274303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.274406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.274429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.274439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.274448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.573 [2024-07-16 00:56:58.274468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.573 qpair failed and we were unable to recover it. 
00:30:40.573 [2024-07-16 00:56:58.284312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.284410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.284430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.284440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.284449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.573 [2024-07-16 00:56:58.284468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.573 qpair failed and we were unable to recover it. 00:30:40.573 [2024-07-16 00:56:58.294350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.294439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.294458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.294468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.294477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.573 [2024-07-16 00:56:58.294496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.573 qpair failed and we were unable to recover it. 00:30:40.573 [2024-07-16 00:56:58.304572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.304749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.304770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.304780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.304789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.573 [2024-07-16 00:56:58.304809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.573 qpair failed and we were unable to recover it. 
00:30:40.573 [2024-07-16 00:56:58.314415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.314509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.314529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.314539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.314552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.573 [2024-07-16 00:56:58.314571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.573 qpair failed and we were unable to recover it. 00:30:40.573 [2024-07-16 00:56:58.324453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.324594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.324615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.324626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.324635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.573 [2024-07-16 00:56:58.324655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.573 qpair failed and we were unable to recover it. 00:30:40.573 [2024-07-16 00:56:58.334466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.334564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.334584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.334594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.334603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.573 [2024-07-16 00:56:58.334622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.573 qpair failed and we were unable to recover it. 
00:30:40.573 [2024-07-16 00:56:58.344698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.344823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.344844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.344854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.344864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.573 [2024-07-16 00:56:58.344882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.573 qpair failed and we were unable to recover it. 00:30:40.573 [2024-07-16 00:56:58.354526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.354666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.354688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.354698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.354708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.573 [2024-07-16 00:56:58.354728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.573 qpair failed and we were unable to recover it. 00:30:40.573 [2024-07-16 00:56:58.364575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.364683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.364704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.364714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.364722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.573 [2024-07-16 00:56:58.364741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.573 qpair failed and we were unable to recover it. 
00:30:40.573 [2024-07-16 00:56:58.374607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.374703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.374723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.374733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.374741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.573 [2024-07-16 00:56:58.374760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.573 qpair failed and we were unable to recover it. 00:30:40.573 [2024-07-16 00:56:58.384743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.573 [2024-07-16 00:56:58.384868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.573 [2024-07-16 00:56:58.384896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.573 [2024-07-16 00:56:58.384906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.573 [2024-07-16 00:56:58.384915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.574 [2024-07-16 00:56:58.384935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.574 qpair failed and we were unable to recover it. 00:30:40.574 [2024-07-16 00:56:58.394647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.574 [2024-07-16 00:56:58.394753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.574 [2024-07-16 00:56:58.394772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.574 [2024-07-16 00:56:58.394782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.574 [2024-07-16 00:56:58.394790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.574 [2024-07-16 00:56:58.394809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.574 qpair failed and we were unable to recover it. 
00:30:40.574 [2024-07-16 00:56:58.404691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.574 [2024-07-16 00:56:58.404786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.574 [2024-07-16 00:56:58.404805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.574 [2024-07-16 00:56:58.404815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.574 [2024-07-16 00:56:58.404828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.574 [2024-07-16 00:56:58.404847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.574 qpair failed and we were unable to recover it. 00:30:40.834 [2024-07-16 00:56:58.414662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.834 [2024-07-16 00:56:58.414775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.834 [2024-07-16 00:56:58.414795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.834 [2024-07-16 00:56:58.414805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.834 [2024-07-16 00:56:58.414815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.834 [2024-07-16 00:56:58.414834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.834 qpair failed and we were unable to recover it. 00:30:40.834 [2024-07-16 00:56:58.424980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.834 [2024-07-16 00:56:58.425102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.834 [2024-07-16 00:56:58.425123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.834 [2024-07-16 00:56:58.425134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.834 [2024-07-16 00:56:58.425144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.834 [2024-07-16 00:56:58.425164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.834 qpair failed and we were unable to recover it. 
00:30:40.834 [2024-07-16 00:56:58.434844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.834 [2024-07-16 00:56:58.434943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.834 [2024-07-16 00:56:58.434963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.834 [2024-07-16 00:56:58.434973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.834 [2024-07-16 00:56:58.434982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.834 [2024-07-16 00:56:58.435001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.834 qpair failed and we were unable to recover it. 00:30:40.834 [2024-07-16 00:56:58.444828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.834 [2024-07-16 00:56:58.444918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.444938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.444949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.444958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.444977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 00:30:40.835 [2024-07-16 00:56:58.454839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.454945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.454965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.454974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.454983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.455002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 
00:30:40.835 [2024-07-16 00:56:58.465109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.465276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.465299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.465310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.465320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.465341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 00:30:40.835 [2024-07-16 00:56:58.474931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.475044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.475064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.475074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.475083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.475102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 00:30:40.835 [2024-07-16 00:56:58.485014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.485150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.485171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.485182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.485191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.485210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 
00:30:40.835 [2024-07-16 00:56:58.494987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.495116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.495138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.495148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.495161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.495182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 00:30:40.835 [2024-07-16 00:56:58.505300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.505425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.505445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.505455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.505465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.505486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 00:30:40.835 [2024-07-16 00:56:58.515085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.515180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.515200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.515211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.515219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.515238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 
00:30:40.835 [2024-07-16 00:56:58.525054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.525181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.525203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.525212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.525221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.525241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 00:30:40.835 [2024-07-16 00:56:58.535300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.535424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.535444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.535454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.535463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.535482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 00:30:40.835 [2024-07-16 00:56:58.545525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.545657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.545679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.545689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.545699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.545719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 
00:30:40.835 [2024-07-16 00:56:58.555241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.555356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.555376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.555386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.555395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.555416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 00:30:40.835 [2024-07-16 00:56:58.565319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.565420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.565439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.565449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.565458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.565478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 00:30:40.835 [2024-07-16 00:56:58.575263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.575354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.575374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.575385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.835 [2024-07-16 00:56:58.575393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.835 [2024-07-16 00:56:58.575413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.835 qpair failed and we were unable to recover it. 
00:30:40.835 [2024-07-16 00:56:58.585495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.835 [2024-07-16 00:56:58.585660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.835 [2024-07-16 00:56:58.585681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.835 [2024-07-16 00:56:58.585696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.836 [2024-07-16 00:56:58.585706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.836 [2024-07-16 00:56:58.585726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.836 qpair failed and we were unable to recover it. 00:30:40.836 [2024-07-16 00:56:58.595354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.836 [2024-07-16 00:56:58.595455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.836 [2024-07-16 00:56:58.595475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.836 [2024-07-16 00:56:58.595485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.836 [2024-07-16 00:56:58.595494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.836 [2024-07-16 00:56:58.595514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.836 qpair failed and we were unable to recover it. 00:30:40.836 [2024-07-16 00:56:58.605342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.836 [2024-07-16 00:56:58.605442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.836 [2024-07-16 00:56:58.605463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.836 [2024-07-16 00:56:58.605473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.836 [2024-07-16 00:56:58.605482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.836 [2024-07-16 00:56:58.605503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.836 qpair failed and we were unable to recover it. 
00:30:40.836 [2024-07-16 00:56:58.615349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.836 [2024-07-16 00:56:58.615453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.836 [2024-07-16 00:56:58.615474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.836 [2024-07-16 00:56:58.615484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.836 [2024-07-16 00:56:58.615493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.836 [2024-07-16 00:56:58.615513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.836 qpair failed and we were unable to recover it. 00:30:40.836 [2024-07-16 00:56:58.625635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.836 [2024-07-16 00:56:58.625765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.836 [2024-07-16 00:56:58.625792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.836 [2024-07-16 00:56:58.625803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.836 [2024-07-16 00:56:58.625812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.836 [2024-07-16 00:56:58.625834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.836 qpair failed and we were unable to recover it. 00:30:40.836 [2024-07-16 00:56:58.635413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.836 [2024-07-16 00:56:58.635549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.836 [2024-07-16 00:56:58.635572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.836 [2024-07-16 00:56:58.635583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.836 [2024-07-16 00:56:58.635592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.836 [2024-07-16 00:56:58.635613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.836 qpair failed and we were unable to recover it. 
00:30:40.836 [2024-07-16 00:56:58.645530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.836 [2024-07-16 00:56:58.645702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.836 [2024-07-16 00:56:58.645724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.836 [2024-07-16 00:56:58.645735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.836 [2024-07-16 00:56:58.645744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.836 [2024-07-16 00:56:58.645765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.836 qpair failed and we were unable to recover it. 00:30:40.836 [2024-07-16 00:56:58.655546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.836 [2024-07-16 00:56:58.655647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.836 [2024-07-16 00:56:58.655667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.836 [2024-07-16 00:56:58.655677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.836 [2024-07-16 00:56:58.655686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.836 [2024-07-16 00:56:58.655706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.836 qpair failed and we were unable to recover it. 00:30:40.836 [2024-07-16 00:56:58.665786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:40.836 [2024-07-16 00:56:58.665912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:40.836 [2024-07-16 00:56:58.665939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:40.836 [2024-07-16 00:56:58.665950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:40.836 [2024-07-16 00:56:58.665960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:40.836 [2024-07-16 00:56:58.665979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:40.836 qpair failed and we were unable to recover it. 
00:30:41.097 [2024-07-16 00:56:58.675603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.097 [2024-07-16 00:56:58.675703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.097 [2024-07-16 00:56:58.675723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.097 [2024-07-16 00:56:58.675737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.097 [2024-07-16 00:56:58.675746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.097 [2024-07-16 00:56:58.675766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.097 qpair failed and we were unable to recover it. 00:30:41.097 [2024-07-16 00:56:58.685658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.097 [2024-07-16 00:56:58.685754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.097 [2024-07-16 00:56:58.685775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.097 [2024-07-16 00:56:58.685785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.097 [2024-07-16 00:56:58.685794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.097 [2024-07-16 00:56:58.685812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.097 qpair failed and we were unable to recover it. 00:30:41.097 [2024-07-16 00:56:58.695714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.097 [2024-07-16 00:56:58.695806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.097 [2024-07-16 00:56:58.695826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.097 [2024-07-16 00:56:58.695837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.097 [2024-07-16 00:56:58.695845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.097 [2024-07-16 00:56:58.695865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.097 qpair failed and we were unable to recover it. 
00:30:41.097 [2024-07-16 00:56:58.705885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.097 [2024-07-16 00:56:58.706010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.097 [2024-07-16 00:56:58.706029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.097 [2024-07-16 00:56:58.706040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.097 [2024-07-16 00:56:58.706049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.097 [2024-07-16 00:56:58.706068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.097 qpair failed and we were unable to recover it. 00:30:41.097 [2024-07-16 00:56:58.715738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.097 [2024-07-16 00:56:58.715835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.097 [2024-07-16 00:56:58.715856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.097 [2024-07-16 00:56:58.715867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.097 [2024-07-16 00:56:58.715875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.097 [2024-07-16 00:56:58.715895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.097 qpair failed and we were unable to recover it. 00:30:41.097 [2024-07-16 00:56:58.725750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.097 [2024-07-16 00:56:58.725845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.097 [2024-07-16 00:56:58.725866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.097 [2024-07-16 00:56:58.725876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.097 [2024-07-16 00:56:58.725885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.097 [2024-07-16 00:56:58.725905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.097 qpair failed and we were unable to recover it. 
00:30:41.097 [2024-07-16 00:56:58.735777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.097 [2024-07-16 00:56:58.735878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.097 [2024-07-16 00:56:58.735897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.097 [2024-07-16 00:56:58.735907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.735916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.735936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 00:30:41.098 [2024-07-16 00:56:58.746081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.746250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.746277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.746288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.746297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.746317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 00:30:41.098 [2024-07-16 00:56:58.755834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.755942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.755963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.755973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.755982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.756002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 
00:30:41.098 [2024-07-16 00:56:58.765819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.765941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.765963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.765977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.765986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.766007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 00:30:41.098 [2024-07-16 00:56:58.775939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.776029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.776049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.776059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.776067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.776087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 00:30:41.098 [2024-07-16 00:56:58.786194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.786332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.786354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.786364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.786373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.786393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 
00:30:41.098 [2024-07-16 00:56:58.795991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.796100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.796119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.796129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.796139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.796158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 00:30:41.098 [2024-07-16 00:56:58.805952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.806072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.806095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.806104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.806113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.806134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 00:30:41.098 [2024-07-16 00:56:58.816006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.816106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.816128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.816139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.816149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.816169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 
00:30:41.098 [2024-07-16 00:56:58.826277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.826430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.826452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.826463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.826472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.826492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 00:30:41.098 [2024-07-16 00:56:58.836057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.836167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.836186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.836196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.836205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.836224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 00:30:41.098 [2024-07-16 00:56:58.846148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.846277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.846307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.846317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.846326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.846346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 
00:30:41.098 [2024-07-16 00:56:58.856167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.856300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.856325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.856336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.856345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.856366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 00:30:41.098 [2024-07-16 00:56:58.866370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.866493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.866514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.866523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.866532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.866553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 00:30:41.098 [2024-07-16 00:56:58.876303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.098 [2024-07-16 00:56:58.876405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.098 [2024-07-16 00:56:58.876425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.098 [2024-07-16 00:56:58.876434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.098 [2024-07-16 00:56:58.876443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.098 [2024-07-16 00:56:58.876463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.098 qpair failed and we were unable to recover it. 
00:30:41.098 [2024-07-16 00:56:58.886252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.099 [2024-07-16 00:56:58.886417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.099 [2024-07-16 00:56:58.886439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.099 [2024-07-16 00:56:58.886450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.099 [2024-07-16 00:56:58.886459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.099 [2024-07-16 00:56:58.886480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.099 qpair failed and we were unable to recover it. 00:30:41.099 [2024-07-16 00:56:58.896288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.099 [2024-07-16 00:56:58.896398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.099 [2024-07-16 00:56:58.896417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.099 [2024-07-16 00:56:58.896428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.099 [2024-07-16 00:56:58.896437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.099 [2024-07-16 00:56:58.896457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.099 qpair failed and we were unable to recover it. 00:30:41.099 [2024-07-16 00:56:58.906634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.099 [2024-07-16 00:56:58.906763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.099 [2024-07-16 00:56:58.906785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.099 [2024-07-16 00:56:58.906796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.099 [2024-07-16 00:56:58.906805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.099 [2024-07-16 00:56:58.906825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.099 qpair failed and we were unable to recover it. 
00:30:41.099 [2024-07-16 00:56:58.916469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.099 [2024-07-16 00:56:58.916579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.099 [2024-07-16 00:56:58.916601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.099 [2024-07-16 00:56:58.916611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.099 [2024-07-16 00:56:58.916620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.099 [2024-07-16 00:56:58.916640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.099 qpair failed and we were unable to recover it. 00:30:41.099 [2024-07-16 00:56:58.926437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.099 [2024-07-16 00:56:58.926539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.099 [2024-07-16 00:56:58.926559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.099 [2024-07-16 00:56:58.926569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.099 [2024-07-16 00:56:58.926577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.099 [2024-07-16 00:56:58.926596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.099 qpair failed and we were unable to recover it. 00:30:41.359 [2024-07-16 00:56:58.936448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.359 [2024-07-16 00:56:58.936544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.359 [2024-07-16 00:56:58.936564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.359 [2024-07-16 00:56:58.936574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.359 [2024-07-16 00:56:58.936583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.359 [2024-07-16 00:56:58.936603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.359 qpair failed and we were unable to recover it. 
00:30:41.359 [2024-07-16 00:56:58.946663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.359 [2024-07-16 00:56:58.946822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.359 [2024-07-16 00:56:58.946847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.359 [2024-07-16 00:56:58.946857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.359 [2024-07-16 00:56:58.946866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.359 [2024-07-16 00:56:58.946886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.359 qpair failed and we were unable to recover it. 00:30:41.359 [2024-07-16 00:56:58.956552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.359 [2024-07-16 00:56:58.956662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.359 [2024-07-16 00:56:58.956682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.359 [2024-07-16 00:56:58.956692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.359 [2024-07-16 00:56:58.956701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.359 [2024-07-16 00:56:58.956720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.359 qpair failed and we were unable to recover it. 00:30:41.359 [2024-07-16 00:56:58.966524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.359 [2024-07-16 00:56:58.966653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.359 [2024-07-16 00:56:58.966674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.359 [2024-07-16 00:56:58.966685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.359 [2024-07-16 00:56:58.966694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.359 [2024-07-16 00:56:58.966713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.359 qpair failed and we were unable to recover it. 
00:30:41.359 [2024-07-16 00:56:58.976627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.359 [2024-07-16 00:56:58.976752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.359 [2024-07-16 00:56:58.976773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.359 [2024-07-16 00:56:58.976783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.359 [2024-07-16 00:56:58.976792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:58.976812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-16 00:56:58.986802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:58.986926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:58.986948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:58.986959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:58.986968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:58.986992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-16 00:56:58.996659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:58.996773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:58.996793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:58.996803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:58.996813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:58.996832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 
00:30:41.360 [2024-07-16 00:56:59.006614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:59.006709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:59.006730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:59.006740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:59.006749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:59.006768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-16 00:56:59.016649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:59.016743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:59.016763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:59.016774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:59.016782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:59.016802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-16 00:56:59.026897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:59.027086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:59.027108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:59.027118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:59.027128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:59.027148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 
00:30:41.360 [2024-07-16 00:56:59.036725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:59.036829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:59.036853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:59.036863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:59.036873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:59.036893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-16 00:56:59.046746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:59.046857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:59.046877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:59.046888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:59.046897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:59.046917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-16 00:56:59.056875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:59.056971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:59.056990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:59.057000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:59.057009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:59.057029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 
00:30:41.360 [2024-07-16 00:56:59.067102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:59.067262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:59.067281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:59.067291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:59.067300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:59.067320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-16 00:56:59.076859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:59.076967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:59.076989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:59.077001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:59.077011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:59.077034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-16 00:56:59.086925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:59.087050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:59.087070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:59.087080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:59.087089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:59.087108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 
00:30:41.360 [2024-07-16 00:56:59.096983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:59.097075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:59.097096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:59.097106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:59.097114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:59.097133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-16 00:56:59.107147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:59.107299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:59.107319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:59.107329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:59.107338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:59.107357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 00:30:41.360 [2024-07-16 00:56:59.116979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.360 [2024-07-16 00:56:59.117121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.360 [2024-07-16 00:56:59.117141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.360 [2024-07-16 00:56:59.117152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.360 [2024-07-16 00:56:59.117160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.360 [2024-07-16 00:56:59.117179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.360 qpair failed and we were unable to recover it. 
00:30:41.360 [2024-07-16 00:56:59.127091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.361 [2024-07-16 00:56:59.127242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.361 [2024-07-16 00:56:59.127274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.361 [2024-07-16 00:56:59.127285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.361 [2024-07-16 00:56:59.127294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.361 [2024-07-16 00:56:59.127313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-16 00:56:59.137046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.361 [2024-07-16 00:56:59.137171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.361 [2024-07-16 00:56:59.137192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.361 [2024-07-16 00:56:59.137202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.361 [2024-07-16 00:56:59.137211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.361 [2024-07-16 00:56:59.137229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-16 00:56:59.147304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.361 [2024-07-16 00:56:59.147427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.361 [2024-07-16 00:56:59.147448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.361 [2024-07-16 00:56:59.147458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.361 [2024-07-16 00:56:59.147467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.361 [2024-07-16 00:56:59.147486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.361 qpair failed and we were unable to recover it. 
00:30:41.361 [2024-07-16 00:56:59.157132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.361 [2024-07-16 00:56:59.157318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.361 [2024-07-16 00:56:59.157337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.361 [2024-07-16 00:56:59.157347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.361 [2024-07-16 00:56:59.157356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.361 [2024-07-16 00:56:59.157376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-16 00:56:59.167231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.361 [2024-07-16 00:56:59.167335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.361 [2024-07-16 00:56:59.167355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.361 [2024-07-16 00:56:59.167365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.361 [2024-07-16 00:56:59.167374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.361 [2024-07-16 00:56:59.167398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.361 [2024-07-16 00:56:59.177225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.361 [2024-07-16 00:56:59.177323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.361 [2024-07-16 00:56:59.177344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.361 [2024-07-16 00:56:59.177354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.361 [2024-07-16 00:56:59.177362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.361 [2024-07-16 00:56:59.177381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.361 qpair failed and we were unable to recover it. 
00:30:41.361 [2024-07-16 00:56:59.187523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.361 [2024-07-16 00:56:59.187658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.361 [2024-07-16 00:56:59.187677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.361 [2024-07-16 00:56:59.187687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.361 [2024-07-16 00:56:59.187697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.361 [2024-07-16 00:56:59.187715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.361 qpair failed and we were unable to recover it. 00:30:41.621 [2024-07-16 00:56:59.197314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.621 [2024-07-16 00:56:59.197423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.621 [2024-07-16 00:56:59.197443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.621 [2024-07-16 00:56:59.197453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.621 [2024-07-16 00:56:59.197462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.621 [2024-07-16 00:56:59.197482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.621 qpair failed and we were unable to recover it. 00:30:41.621 [2024-07-16 00:56:59.207323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.621 [2024-07-16 00:56:59.207470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.621 [2024-07-16 00:56:59.207491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.621 [2024-07-16 00:56:59.207501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.621 [2024-07-16 00:56:59.207509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.621 [2024-07-16 00:56:59.207528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.621 qpair failed and we were unable to recover it. 
00:30:41.621 [2024-07-16 00:56:59.217387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.621 [2024-07-16 00:56:59.217491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.621 [2024-07-16 00:56:59.217517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.621 [2024-07-16 00:56:59.217528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.621 [2024-07-16 00:56:59.217537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.621 [2024-07-16 00:56:59.217557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.621 qpair failed and we were unable to recover it. 00:30:41.621 [2024-07-16 00:56:59.227535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.621 [2024-07-16 00:56:59.227673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.621 [2024-07-16 00:56:59.227696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.621 [2024-07-16 00:56:59.227707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.621 [2024-07-16 00:56:59.227717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.621 [2024-07-16 00:56:59.227737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.621 qpair failed and we were unable to recover it. 00:30:41.621 [2024-07-16 00:56:59.237444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:41.621 [2024-07-16 00:56:59.237547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:41.621 [2024-07-16 00:56:59.237567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:41.621 [2024-07-16 00:56:59.237576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.621 [2024-07-16 00:56:59.237585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:41.621 [2024-07-16 00:56:59.237605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:41.621 qpair failed and we were unable to recover it. 
[... 00:30:41.621 - 00:30:42.147: the same seven-line failure sequence (ctrlr.c: 761 "Unknown controller ID 0x1", nvme_fabric.c: 600 "Connect command failed, rc -5" for trtype:TCP traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1, nvme_fabric.c: 611 "sct 1, sc 130", nvme_tcp.c:2435 "Failed to poll NVMe-oF Fabric CONNECT command", nvme_tcp.c:2225 "Failed to connect tqpair=0xeaafd0", nvme_qpair.c: 804 "CQ transport error -6 (No such device or address) on qpair id 3", "qpair failed and we were unable to recover it.") repeats for each subsequent connect attempt from 00:56:59.247 through 00:56:59.899 ...]
00:30:42.147 [2024-07-16 00:56:59.909884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.147 [2024-07-16 00:56:59.910036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.147 [2024-07-16 00:56:59.910056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.147 [2024-07-16 00:56:59.910067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.147 [2024-07-16 00:56:59.910076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.147 [2024-07-16 00:56:59.910094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.147 qpair failed and we were unable to recover it. 00:30:42.147 [2024-07-16 00:56:59.919730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.147 [2024-07-16 00:56:59.919869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.147 [2024-07-16 00:56:59.919890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.147 [2024-07-16 00:56:59.919899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.147 [2024-07-16 00:56:59.919908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.147 [2024-07-16 00:56:59.919927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.147 qpair failed and we were unable to recover it. 00:30:42.148 [2024-07-16 00:56:59.929799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.148 [2024-07-16 00:56:59.929890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.148 [2024-07-16 00:56:59.929911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.148 [2024-07-16 00:56:59.929921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.148 [2024-07-16 00:56:59.929930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.148 [2024-07-16 00:56:59.929952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.148 qpair failed and we were unable to recover it. 
00:30:42.148 [2024-07-16 00:56:59.939769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.148 [2024-07-16 00:56:59.939862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.148 [2024-07-16 00:56:59.939882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.148 [2024-07-16 00:56:59.939892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.148 [2024-07-16 00:56:59.939901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.148 [2024-07-16 00:56:59.939920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.148 qpair failed and we were unable to recover it. 00:30:42.148 [2024-07-16 00:56:59.950016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.148 [2024-07-16 00:56:59.950146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.148 [2024-07-16 00:56:59.950166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.148 [2024-07-16 00:56:59.950176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.148 [2024-07-16 00:56:59.950185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.148 [2024-07-16 00:56:59.950204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.148 qpair failed and we were unable to recover it. 00:30:42.148 [2024-07-16 00:56:59.959847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.148 [2024-07-16 00:56:59.959991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.148 [2024-07-16 00:56:59.960011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.148 [2024-07-16 00:56:59.960020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.148 [2024-07-16 00:56:59.960029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.148 [2024-07-16 00:56:59.960048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.148 qpair failed and we were unable to recover it. 
00:30:42.148 [2024-07-16 00:56:59.969879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.148 [2024-07-16 00:56:59.969981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.148 [2024-07-16 00:56:59.970000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.148 [2024-07-16 00:56:59.970010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.148 [2024-07-16 00:56:59.970019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.148 [2024-07-16 00:56:59.970037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.148 qpair failed and we were unable to recover it. 00:30:42.148 [2024-07-16 00:56:59.979911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.148 [2024-07-16 00:56:59.980013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.148 [2024-07-16 00:56:59.980037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.148 [2024-07-16 00:56:59.980047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.148 [2024-07-16 00:56:59.980056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.148 [2024-07-16 00:56:59.980075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.148 qpair failed and we were unable to recover it. 00:30:42.409 [2024-07-16 00:56:59.990149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.409 [2024-07-16 00:56:59.990277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.409 [2024-07-16 00:56:59.990298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.409 [2024-07-16 00:56:59.990308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.409 [2024-07-16 00:56:59.990317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.409 [2024-07-16 00:56:59.990336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.409 qpair failed and we were unable to recover it. 
00:30:42.409 [2024-07-16 00:56:59.999971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.409 [2024-07-16 00:57:00.000098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.409 [2024-07-16 00:57:00.000118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.409 [2024-07-16 00:57:00.000128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.409 [2024-07-16 00:57:00.000137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.409 [2024-07-16 00:57:00.000157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.409 qpair failed and we were unable to recover it. 00:30:42.409 [2024-07-16 00:57:00.010069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.409 [2024-07-16 00:57:00.010181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.409 [2024-07-16 00:57:00.010203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.409 [2024-07-16 00:57:00.010213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.409 [2024-07-16 00:57:00.010221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.409 [2024-07-16 00:57:00.010240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.409 qpair failed and we were unable to recover it. 00:30:42.409 [2024-07-16 00:57:00.020034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.409 [2024-07-16 00:57:00.020143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.409 [2024-07-16 00:57:00.020164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.409 [2024-07-16 00:57:00.020174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.409 [2024-07-16 00:57:00.020183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.409 [2024-07-16 00:57:00.020206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.409 qpair failed and we were unable to recover it. 
00:30:42.409 [2024-07-16 00:57:00.030364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.409 [2024-07-16 00:57:00.030518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.409 [2024-07-16 00:57:00.030538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.409 [2024-07-16 00:57:00.030548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.409 [2024-07-16 00:57:00.030558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.409 [2024-07-16 00:57:00.030579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.409 qpair failed and we were unable to recover it. 00:30:42.409 [2024-07-16 00:57:00.040144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.409 [2024-07-16 00:57:00.040263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.409 [2024-07-16 00:57:00.040288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.409 [2024-07-16 00:57:00.040299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.409 [2024-07-16 00:57:00.040308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.409 [2024-07-16 00:57:00.040330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.409 qpair failed and we were unable to recover it. 00:30:42.409 [2024-07-16 00:57:00.050123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.409 [2024-07-16 00:57:00.050276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.409 [2024-07-16 00:57:00.050297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.409 [2024-07-16 00:57:00.050307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.409 [2024-07-16 00:57:00.050317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.409 [2024-07-16 00:57:00.050337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.409 qpair failed and we were unable to recover it. 
00:30:42.409 [2024-07-16 00:57:00.060212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.409 [2024-07-16 00:57:00.060320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.409 [2024-07-16 00:57:00.060341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.409 [2024-07-16 00:57:00.060351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.409 [2024-07-16 00:57:00.060360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.409 [2024-07-16 00:57:00.060380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.409 qpair failed and we were unable to recover it. 00:30:42.409 [2024-07-16 00:57:00.070450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.409 [2024-07-16 00:57:00.070596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.409 [2024-07-16 00:57:00.070619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.409 [2024-07-16 00:57:00.070630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.409 [2024-07-16 00:57:00.070639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.409 [2024-07-16 00:57:00.070658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.409 qpair failed and we were unable to recover it. 00:30:42.409 [2024-07-16 00:57:00.080263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.409 [2024-07-16 00:57:00.080362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.409 [2024-07-16 00:57:00.080384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.409 [2024-07-16 00:57:00.080394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.409 [2024-07-16 00:57:00.080404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.409 [2024-07-16 00:57:00.080425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.409 qpair failed and we were unable to recover it. 
00:30:42.409 [2024-07-16 00:57:00.090265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.409 [2024-07-16 00:57:00.090363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.409 [2024-07-16 00:57:00.090384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.409 [2024-07-16 00:57:00.090394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.409 [2024-07-16 00:57:00.090403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.409 [2024-07-16 00:57:00.090423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.409 qpair failed and we were unable to recover it. 00:30:42.409 [2024-07-16 00:57:00.100331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.409 [2024-07-16 00:57:00.100444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.409 [2024-07-16 00:57:00.100469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.409 [2024-07-16 00:57:00.100482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.409 [2024-07-16 00:57:00.100491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.409 [2024-07-16 00:57:00.100511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.409 qpair failed and we were unable to recover it. 00:30:42.409 [2024-07-16 00:57:00.110436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.409 [2024-07-16 00:57:00.110568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.409 [2024-07-16 00:57:00.110588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.409 [2024-07-16 00:57:00.110599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.409 [2024-07-16 00:57:00.110608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.409 [2024-07-16 00:57:00.110634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.409 qpair failed and we were unable to recover it. 
00:30:42.409 [2024-07-16 00:57:00.120284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.410 [2024-07-16 00:57:00.120391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.410 [2024-07-16 00:57:00.120411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.410 [2024-07-16 00:57:00.120421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.410 [2024-07-16 00:57:00.120430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.410 [2024-07-16 00:57:00.120449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.410 qpair failed and we were unable to recover it. 00:30:42.410 [2024-07-16 00:57:00.130367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.410 [2024-07-16 00:57:00.130472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.410 [2024-07-16 00:57:00.130492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.410 [2024-07-16 00:57:00.130503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.410 [2024-07-16 00:57:00.130511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.410 [2024-07-16 00:57:00.130530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.410 qpair failed and we were unable to recover it. 00:30:42.410 [2024-07-16 00:57:00.140396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.410 [2024-07-16 00:57:00.140501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.410 [2024-07-16 00:57:00.140520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.410 [2024-07-16 00:57:00.140530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.410 [2024-07-16 00:57:00.140539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.410 [2024-07-16 00:57:00.140558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.410 qpair failed and we were unable to recover it. 
00:30:42.410 [2024-07-16 00:57:00.150633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.410 [2024-07-16 00:57:00.150752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.410 [2024-07-16 00:57:00.150772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.410 [2024-07-16 00:57:00.150782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.410 [2024-07-16 00:57:00.150791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.410 [2024-07-16 00:57:00.150810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.410 qpair failed and we were unable to recover it. 00:30:42.410 [2024-07-16 00:57:00.160444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.410 [2024-07-16 00:57:00.160546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.410 [2024-07-16 00:57:00.160571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.410 [2024-07-16 00:57:00.160581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.410 [2024-07-16 00:57:00.160589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.410 [2024-07-16 00:57:00.160609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.410 qpair failed and we were unable to recover it. 00:30:42.410 [2024-07-16 00:57:00.170524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.410 [2024-07-16 00:57:00.170668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.410 [2024-07-16 00:57:00.170689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.410 [2024-07-16 00:57:00.170699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.410 [2024-07-16 00:57:00.170708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.410 [2024-07-16 00:57:00.170727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.410 qpair failed and we were unable to recover it. 
00:30:42.410 [2024-07-16 00:57:00.180493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.410 [2024-07-16 00:57:00.180581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.410 [2024-07-16 00:57:00.180602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.410 [2024-07-16 00:57:00.180612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.410 [2024-07-16 00:57:00.180621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.410 [2024-07-16 00:57:00.180640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.410 qpair failed and we were unable to recover it. 00:30:42.410 [2024-07-16 00:57:00.190761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.410 [2024-07-16 00:57:00.190883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.410 [2024-07-16 00:57:00.190902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.410 [2024-07-16 00:57:00.190913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.410 [2024-07-16 00:57:00.190922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.410 [2024-07-16 00:57:00.190941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.410 qpair failed and we were unable to recover it. 00:30:42.410 [2024-07-16 00:57:00.200628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.410 [2024-07-16 00:57:00.200725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.410 [2024-07-16 00:57:00.200745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.410 [2024-07-16 00:57:00.200755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.410 [2024-07-16 00:57:00.200769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.410 [2024-07-16 00:57:00.200788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.410 qpair failed and we were unable to recover it. 
00:30:42.410 [2024-07-16 00:57:00.210627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.410 [2024-07-16 00:57:00.210728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.410 [2024-07-16 00:57:00.210749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.410 [2024-07-16 00:57:00.210759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.410 [2024-07-16 00:57:00.210768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.410 [2024-07-16 00:57:00.210787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.410 qpair failed and we were unable to recover it. 00:30:42.410 [2024-07-16 00:57:00.220681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.410 [2024-07-16 00:57:00.220779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.410 [2024-07-16 00:57:00.220801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.410 [2024-07-16 00:57:00.220812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.410 [2024-07-16 00:57:00.220822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.410 [2024-07-16 00:57:00.220842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.410 qpair failed and we were unable to recover it. 00:30:42.410 [2024-07-16 00:57:00.230894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.410 [2024-07-16 00:57:00.231090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.410 [2024-07-16 00:57:00.231112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.410 [2024-07-16 00:57:00.231122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.410 [2024-07-16 00:57:00.231132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.410 [2024-07-16 00:57:00.231152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.410 qpair failed and we were unable to recover it. 
00:30:42.410 [2024-07-16 00:57:00.240708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.410 [2024-07-16 00:57:00.240815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.410 [2024-07-16 00:57:00.240836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.410 [2024-07-16 00:57:00.240847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.410 [2024-07-16 00:57:00.240855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.410 [2024-07-16 00:57:00.240876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.410 qpair failed and we were unable to recover it. 00:30:42.670 [2024-07-16 00:57:00.250787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.670 [2024-07-16 00:57:00.250928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.670 [2024-07-16 00:57:00.250949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.670 [2024-07-16 00:57:00.250959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.670 [2024-07-16 00:57:00.250968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.670 [2024-07-16 00:57:00.250987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.670 qpair failed and we were unable to recover it. 00:30:42.670 [2024-07-16 00:57:00.260781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.670 [2024-07-16 00:57:00.260881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.670 [2024-07-16 00:57:00.260901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.670 [2024-07-16 00:57:00.260911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.670 [2024-07-16 00:57:00.260920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.670 [2024-07-16 00:57:00.260940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.670 qpair failed and we were unable to recover it. 
00:30:42.670 [2024-07-16 00:57:00.271078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.670 [2024-07-16 00:57:00.271228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.670 [2024-07-16 00:57:00.271248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.670 [2024-07-16 00:57:00.271265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.670 [2024-07-16 00:57:00.271275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.670 [2024-07-16 00:57:00.271295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.670 qpair failed and we were unable to recover it. 00:30:42.670 [2024-07-16 00:57:00.280855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.670 [2024-07-16 00:57:00.280989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.670 [2024-07-16 00:57:00.281009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.670 [2024-07-16 00:57:00.281019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.670 [2024-07-16 00:57:00.281028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.670 [2024-07-16 00:57:00.281047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.670 qpair failed and we were unable to recover it. 00:30:42.670 [2024-07-16 00:57:00.290817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.670 [2024-07-16 00:57:00.290911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.670 [2024-07-16 00:57:00.290931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.670 [2024-07-16 00:57:00.290941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.670 [2024-07-16 00:57:00.290954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.670 [2024-07-16 00:57:00.290974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.671 qpair failed and we were unable to recover it. 
00:30:42.671 [2024-07-16 00:57:00.300911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.671 [2024-07-16 00:57:00.301015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.671 [2024-07-16 00:57:00.301037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.671 [2024-07-16 00:57:00.301047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.671 [2024-07-16 00:57:00.301057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.671 [2024-07-16 00:57:00.301077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.671 qpair failed and we were unable to recover it. 00:30:42.671 [2024-07-16 00:57:00.311178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.671 [2024-07-16 00:57:00.311314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.671 [2024-07-16 00:57:00.311334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.671 [2024-07-16 00:57:00.311345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.671 [2024-07-16 00:57:00.311354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.671 [2024-07-16 00:57:00.311373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.671 qpair failed and we were unable to recover it. 00:30:42.671 [2024-07-16 00:57:00.320995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.671 [2024-07-16 00:57:00.321109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.671 [2024-07-16 00:57:00.321129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.671 [2024-07-16 00:57:00.321139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.671 [2024-07-16 00:57:00.321148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.671 [2024-07-16 00:57:00.321167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.671 qpair failed and we were unable to recover it. 
00:30:42.671 [2024-07-16 00:57:00.331055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.671 [2024-07-16 00:57:00.331154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.671 [2024-07-16 00:57:00.331176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.671 [2024-07-16 00:57:00.331186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.671 [2024-07-16 00:57:00.331195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.671 [2024-07-16 00:57:00.331216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.671 qpair failed and we were unable to recover it. 00:30:42.671 [2024-07-16 00:57:00.341037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.671 [2024-07-16 00:57:00.341150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.671 [2024-07-16 00:57:00.341174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.671 [2024-07-16 00:57:00.341185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.671 [2024-07-16 00:57:00.341195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.671 [2024-07-16 00:57:00.341215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.671 qpair failed and we were unable to recover it. 00:30:42.671 [2024-07-16 00:57:00.351286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.671 [2024-07-16 00:57:00.351410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.671 [2024-07-16 00:57:00.351430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.671 [2024-07-16 00:57:00.351441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.671 [2024-07-16 00:57:00.351450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.671 [2024-07-16 00:57:00.351469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.671 qpair failed and we were unable to recover it. 
00:30:42.699 [2024-07-16 00:57:00.361083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.361192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.361211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.361222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.361230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.361249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.699 qpair failed and we were unable to recover it. 00:30:42.699 [2024-07-16 00:57:00.371103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.371229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.371249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.371268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.371277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.371298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.699 qpair failed and we were unable to recover it. 00:30:42.699 [2024-07-16 00:57:00.381173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.381284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.381306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.381317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.381329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.381350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.699 qpair failed and we were unable to recover it. 
00:30:42.699 [2024-07-16 00:57:00.391445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.391585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.391606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.391615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.391625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.391644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.699 qpair failed and we were unable to recover it. 00:30:42.699 [2024-07-16 00:57:00.401219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.401324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.401344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.401354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.401363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.401383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.699 qpair failed and we were unable to recover it. 00:30:42.699 [2024-07-16 00:57:00.411305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.411399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.411420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.411430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.411439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.411458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.699 qpair failed and we were unable to recover it. 
00:30:42.699 [2024-07-16 00:57:00.421356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.421453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.421473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.421484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.421492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.421511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.699 qpair failed and we were unable to recover it. 00:30:42.699 [2024-07-16 00:57:00.431519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.431681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.431701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.431712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.431721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.431740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.699 qpair failed and we were unable to recover it. 00:30:42.699 [2024-07-16 00:57:00.441280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.441400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.441421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.441430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.441439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.441459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.699 qpair failed and we were unable to recover it. 
00:30:42.699 [2024-07-16 00:57:00.451317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.451419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.451439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.451449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.451458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.451478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.699 qpair failed and we were unable to recover it. 00:30:42.699 [2024-07-16 00:57:00.461451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.461597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.461618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.461629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.461639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.461658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.699 qpair failed and we were unable to recover it. 00:30:42.699 [2024-07-16 00:57:00.471671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.471800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.471821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.471836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.471845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.471865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.699 qpair failed and we were unable to recover it. 
00:30:42.699 [2024-07-16 00:57:00.481426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.481553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.481575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.481585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.481595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.481615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.699 qpair failed and we were unable to recover it. 00:30:42.699 [2024-07-16 00:57:00.491503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.699 [2024-07-16 00:57:00.491593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.699 [2024-07-16 00:57:00.491614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.699 [2024-07-16 00:57:00.491624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.699 [2024-07-16 00:57:00.491633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.699 [2024-07-16 00:57:00.491653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.700 qpair failed and we were unable to recover it. 00:30:42.700 [2024-07-16 00:57:00.501494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.700 [2024-07-16 00:57:00.501588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.700 [2024-07-16 00:57:00.501608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.700 [2024-07-16 00:57:00.501618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.700 [2024-07-16 00:57:00.501626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.700 [2024-07-16 00:57:00.501646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.700 qpair failed and we were unable to recover it. 
00:30:42.960 [2024-07-16 00:57:00.511724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.960 [2024-07-16 00:57:00.511882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.960 [2024-07-16 00:57:00.511903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.960 [2024-07-16 00:57:00.511914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.960 [2024-07-16 00:57:00.511923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.960 [2024-07-16 00:57:00.511943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.960 qpair failed and we were unable to recover it. 00:30:42.960 [2024-07-16 00:57:00.521604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.960 [2024-07-16 00:57:00.521732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.960 [2024-07-16 00:57:00.521752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.960 [2024-07-16 00:57:00.521762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.960 [2024-07-16 00:57:00.521771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.960 [2024-07-16 00:57:00.521791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.960 qpair failed and we were unable to recover it. 00:30:42.960 [2024-07-16 00:57:00.531631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.960 [2024-07-16 00:57:00.531729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.960 [2024-07-16 00:57:00.531749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.960 [2024-07-16 00:57:00.531759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.960 [2024-07-16 00:57:00.531768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.960 [2024-07-16 00:57:00.531787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.960 qpair failed and we were unable to recover it. 
00:30:42.960 [2024-07-16 00:57:00.541684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.960 [2024-07-16 00:57:00.541805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.960 [2024-07-16 00:57:00.541827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.960 [2024-07-16 00:57:00.541838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.960 [2024-07-16 00:57:00.541846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.960 [2024-07-16 00:57:00.541866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.960 qpair failed and we were unable to recover it. 00:30:42.960 [2024-07-16 00:57:00.551861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.960 [2024-07-16 00:57:00.552023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.960 [2024-07-16 00:57:00.552044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.960 [2024-07-16 00:57:00.552054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.960 [2024-07-16 00:57:00.552062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.960 [2024-07-16 00:57:00.552081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.960 qpair failed and we were unable to recover it. 00:30:42.960 [2024-07-16 00:57:00.561717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.960 [2024-07-16 00:57:00.561819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.960 [2024-07-16 00:57:00.561840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.960 [2024-07-16 00:57:00.561854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.960 [2024-07-16 00:57:00.561864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.960 [2024-07-16 00:57:00.561883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.960 qpair failed and we were unable to recover it. 
00:30:42.960 [2024-07-16 00:57:00.571788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.960 [2024-07-16 00:57:00.571931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.960 [2024-07-16 00:57:00.571952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.960 [2024-07-16 00:57:00.571963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.960 [2024-07-16 00:57:00.571972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.960 [2024-07-16 00:57:00.571992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.960 qpair failed and we were unable to recover it. 00:30:42.960 [2024-07-16 00:57:00.581814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.960 [2024-07-16 00:57:00.581935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.960 [2024-07-16 00:57:00.581957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.960 [2024-07-16 00:57:00.581968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.960 [2024-07-16 00:57:00.581977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.960 [2024-07-16 00:57:00.581997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.960 qpair failed and we were unable to recover it. 00:30:42.960 [2024-07-16 00:57:00.592085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.960 [2024-07-16 00:57:00.592238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.960 [2024-07-16 00:57:00.592266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.960 [2024-07-16 00:57:00.592277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.960 [2024-07-16 00:57:00.592287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.960 [2024-07-16 00:57:00.592307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.960 qpair failed and we were unable to recover it. 
00:30:42.960 [2024-07-16 00:57:00.601855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.960 [2024-07-16 00:57:00.601969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.960 [2024-07-16 00:57:00.601991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.960 [2024-07-16 00:57:00.602001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.960 [2024-07-16 00:57:00.602010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.960 [2024-07-16 00:57:00.602029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.960 qpair failed and we were unable to recover it. 00:30:42.960 [2024-07-16 00:57:00.611948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.960 [2024-07-16 00:57:00.612057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.960 [2024-07-16 00:57:00.612080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.960 [2024-07-16 00:57:00.612091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.960 [2024-07-16 00:57:00.612100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.960 [2024-07-16 00:57:00.612120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.960 qpair failed and we were unable to recover it. 00:30:42.960 [2024-07-16 00:57:00.621954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.960 [2024-07-16 00:57:00.622086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.960 [2024-07-16 00:57:00.622108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.960 [2024-07-16 00:57:00.622119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.960 [2024-07-16 00:57:00.622127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.960 [2024-07-16 00:57:00.622146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.960 qpair failed and we were unable to recover it. 
00:30:42.960 [2024-07-16 00:57:00.632110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.960 [2024-07-16 00:57:00.632232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.632252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.632271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.632279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.632299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 00:30:42.961 [2024-07-16 00:57:00.642015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.642110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.642131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.642141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.642150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.642169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 00:30:42.961 [2024-07-16 00:57:00.652056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.652207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.652227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.652241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.652250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.652279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 
00:30:42.961 [2024-07-16 00:57:00.662070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.662174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.662195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.662205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.662214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.662234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 00:30:42.961 [2024-07-16 00:57:00.672371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.672495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.672515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.672526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.672535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.672555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 00:30:42.961 [2024-07-16 00:57:00.682142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.682284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.682305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.682315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.682323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.682343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 
00:30:42.961 [2024-07-16 00:57:00.692192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.692298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.692320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.692330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.692340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.692361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 00:30:42.961 [2024-07-16 00:57:00.702225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.702346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.702369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.702379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.702388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.702409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 00:30:42.961 [2024-07-16 00:57:00.712507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.712631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.712651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.712662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.712671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.712690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 
00:30:42.961 [2024-07-16 00:57:00.722320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.722425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.722446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.722456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.722465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.722485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 00:30:42.961 [2024-07-16 00:57:00.732388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.732517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.732537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.732548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.732558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.732578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 00:30:42.961 [2024-07-16 00:57:00.742366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.742465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.742486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.742501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.742510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.742531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 
00:30:42.961 [2024-07-16 00:57:00.752623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.752801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.752821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.752833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.752841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.752861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 00:30:42.961 [2024-07-16 00:57:00.762493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.762602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.762623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.961 [2024-07-16 00:57:00.762634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.961 [2024-07-16 00:57:00.762643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.961 [2024-07-16 00:57:00.762663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.961 qpair failed and we were unable to recover it. 00:30:42.961 [2024-07-16 00:57:00.772385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.961 [2024-07-16 00:57:00.772479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.961 [2024-07-16 00:57:00.772500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.962 [2024-07-16 00:57:00.772511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.962 [2024-07-16 00:57:00.772519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.962 [2024-07-16 00:57:00.772539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.962 qpair failed and we were unable to recover it. 
00:30:42.962 [2024-07-16 00:57:00.782421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.962 [2024-07-16 00:57:00.782566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.962 [2024-07-16 00:57:00.782587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.962 [2024-07-16 00:57:00.782598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.962 [2024-07-16 00:57:00.782607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.962 [2024-07-16 00:57:00.782627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.962 qpair failed and we were unable to recover it. 00:30:42.962 [2024-07-16 00:57:00.792723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:42.962 [2024-07-16 00:57:00.792846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:42.962 [2024-07-16 00:57:00.792867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:42.962 [2024-07-16 00:57:00.792878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:42.962 [2024-07-16 00:57:00.792886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:42.962 [2024-07-16 00:57:00.792906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:42.962 qpair failed and we were unable to recover it. 00:30:43.223 [2024-07-16 00:57:00.802497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.802594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.802617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.223 [2024-07-16 00:57:00.802634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.223 [2024-07-16 00:57:00.802648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.223 [2024-07-16 00:57:00.802669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.223 qpair failed and we were unable to recover it. 
00:30:43.223 [2024-07-16 00:57:00.812587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.812694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.812717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.223 [2024-07-16 00:57:00.812728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.223 [2024-07-16 00:57:00.812737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.223 [2024-07-16 00:57:00.812757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.223 qpair failed and we were unable to recover it. 00:30:43.223 [2024-07-16 00:57:00.822596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.822715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.822737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.223 [2024-07-16 00:57:00.822748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.223 [2024-07-16 00:57:00.822757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.223 [2024-07-16 00:57:00.822777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.223 qpair failed and we were unable to recover it. 00:30:43.223 [2024-07-16 00:57:00.832874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.833021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.833047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.223 [2024-07-16 00:57:00.833058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.223 [2024-07-16 00:57:00.833067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.223 [2024-07-16 00:57:00.833089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.223 qpair failed and we were unable to recover it. 
00:30:43.223 [2024-07-16 00:57:00.842592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.842691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.842713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.223 [2024-07-16 00:57:00.842724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.223 [2024-07-16 00:57:00.842732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.223 [2024-07-16 00:57:00.842751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.223 qpair failed and we were unable to recover it. 00:30:43.223 [2024-07-16 00:57:00.852708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.852818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.852840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.223 [2024-07-16 00:57:00.852850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.223 [2024-07-16 00:57:00.852859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.223 [2024-07-16 00:57:00.852880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.223 qpair failed and we were unable to recover it. 00:30:43.223 [2024-07-16 00:57:00.862650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.862749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.862770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.223 [2024-07-16 00:57:00.862780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.223 [2024-07-16 00:57:00.862789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.223 [2024-07-16 00:57:00.862808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.223 qpair failed and we were unable to recover it. 
00:30:43.223 [2024-07-16 00:57:00.872990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.873117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.873138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.223 [2024-07-16 00:57:00.873149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.223 [2024-07-16 00:57:00.873158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.223 [2024-07-16 00:57:00.873182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.223 qpair failed and we were unable to recover it. 00:30:43.223 [2024-07-16 00:57:00.882788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.882890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.882917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.223 [2024-07-16 00:57:00.882930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.223 [2024-07-16 00:57:00.882939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.223 [2024-07-16 00:57:00.882960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.223 qpair failed and we were unable to recover it. 00:30:43.223 [2024-07-16 00:57:00.892852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.892956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.892978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.223 [2024-07-16 00:57:00.892988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.223 [2024-07-16 00:57:00.892997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.223 [2024-07-16 00:57:00.893030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.223 qpair failed and we were unable to recover it. 
00:30:43.223 [2024-07-16 00:57:00.902859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.902969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.902990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.223 [2024-07-16 00:57:00.903001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.223 [2024-07-16 00:57:00.903009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.223 [2024-07-16 00:57:00.903029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.223 qpair failed and we were unable to recover it. 00:30:43.223 [2024-07-16 00:57:00.913076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.913227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.913248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.223 [2024-07-16 00:57:00.913266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.223 [2024-07-16 00:57:00.913276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.223 [2024-07-16 00:57:00.913296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.223 qpair failed and we were unable to recover it. 00:30:43.223 [2024-07-16 00:57:00.922934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.923065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.923092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.223 [2024-07-16 00:57:00.923102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.223 [2024-07-16 00:57:00.923111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.223 [2024-07-16 00:57:00.923131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.223 qpair failed and we were unable to recover it. 
00:30:43.223 [2024-07-16 00:57:00.932993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.223 [2024-07-16 00:57:00.933095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.223 [2024-07-16 00:57:00.933116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.224 [2024-07-16 00:57:00.933126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.224 [2024-07-16 00:57:00.933135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.224 [2024-07-16 00:57:00.933154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.224 qpair failed and we were unable to recover it. 00:30:43.224 [2024-07-16 00:57:00.942997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.224 [2024-07-16 00:57:00.943091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.224 [2024-07-16 00:57:00.943112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.224 [2024-07-16 00:57:00.943123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.224 [2024-07-16 00:57:00.943131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.224 [2024-07-16 00:57:00.943151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.224 qpair failed and we were unable to recover it. 00:30:43.224 [2024-07-16 00:57:00.953251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.224 [2024-07-16 00:57:00.953437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.224 [2024-07-16 00:57:00.953461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.224 [2024-07-16 00:57:00.953472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.224 [2024-07-16 00:57:00.953480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.224 [2024-07-16 00:57:00.953500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.224 qpair failed and we were unable to recover it. 
00:30:43.224 [2024-07-16 00:57:00.962994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.224 [2024-07-16 00:57:00.963158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.224 [2024-07-16 00:57:00.963179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.224 [2024-07-16 00:57:00.963190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.224 [2024-07-16 00:57:00.963198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.224 [2024-07-16 00:57:00.963223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.224 qpair failed and we were unable to recover it. 00:30:43.224 [2024-07-16 00:57:00.973067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.224 [2024-07-16 00:57:00.973158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.224 [2024-07-16 00:57:00.973179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.224 [2024-07-16 00:57:00.973190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.224 [2024-07-16 00:57:00.973199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.224 [2024-07-16 00:57:00.973218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.224 qpair failed and we were unable to recover it. 00:30:43.224 [2024-07-16 00:57:00.983112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.224 [2024-07-16 00:57:00.983261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.224 [2024-07-16 00:57:00.983281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.224 [2024-07-16 00:57:00.983291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.224 [2024-07-16 00:57:00.983301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.224 [2024-07-16 00:57:00.983320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.224 qpair failed and we were unable to recover it. 
00:30:43.224 [2024-07-16 00:57:00.993281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.224 [2024-07-16 00:57:00.993404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.224 [2024-07-16 00:57:00.993425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.224 [2024-07-16 00:57:00.993435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.224 [2024-07-16 00:57:00.993444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.224 [2024-07-16 00:57:00.993464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.224 qpair failed and we were unable to recover it. 00:30:43.224 [2024-07-16 00:57:01.003179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.224 [2024-07-16 00:57:01.003290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.224 [2024-07-16 00:57:01.003311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.224 [2024-07-16 00:57:01.003322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.224 [2024-07-16 00:57:01.003331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.224 [2024-07-16 00:57:01.003350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.224 qpair failed and we were unable to recover it. 00:30:43.224 [2024-07-16 00:57:01.013211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.224 [2024-07-16 00:57:01.013309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.224 [2024-07-16 00:57:01.013334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.224 [2024-07-16 00:57:01.013345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.224 [2024-07-16 00:57:01.013353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.224 [2024-07-16 00:57:01.013373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.224 qpair failed and we were unable to recover it. 
00:30:43.224 [2024-07-16 00:57:01.023219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.224 [2024-07-16 00:57:01.023336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.224 [2024-07-16 00:57:01.023357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.224 [2024-07-16 00:57:01.023367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.224 [2024-07-16 00:57:01.023376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.224 [2024-07-16 00:57:01.023395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.224 qpair failed and we were unable to recover it. 00:30:43.224 [2024-07-16 00:57:01.033471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.224 [2024-07-16 00:57:01.033598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.224 [2024-07-16 00:57:01.033618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.224 [2024-07-16 00:57:01.033628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.224 [2024-07-16 00:57:01.033636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.224 [2024-07-16 00:57:01.033656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.224 qpair failed and we were unable to recover it. 00:30:43.224 [2024-07-16 00:57:01.043314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.224 [2024-07-16 00:57:01.043416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.224 [2024-07-16 00:57:01.043437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.224 [2024-07-16 00:57:01.043449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.224 [2024-07-16 00:57:01.043462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.224 [2024-07-16 00:57:01.043483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.224 qpair failed and we were unable to recover it. 
00:30:43.224 [2024-07-16 00:57:01.053306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.224 [2024-07-16 00:57:01.053407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.224 [2024-07-16 00:57:01.053429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.224 [2024-07-16 00:57:01.053439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.224 [2024-07-16 00:57:01.053448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.224 [2024-07-16 00:57:01.053476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.224 qpair failed and we were unable to recover it. 00:30:43.485 [2024-07-16 00:57:01.063321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.485 [2024-07-16 00:57:01.063422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.485 [2024-07-16 00:57:01.063444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.485 [2024-07-16 00:57:01.063454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.485 [2024-07-16 00:57:01.063463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.485 [2024-07-16 00:57:01.063483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.485 qpair failed and we were unable to recover it. 00:30:43.485 [2024-07-16 00:57:01.073586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.485 [2024-07-16 00:57:01.073744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.485 [2024-07-16 00:57:01.073765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.485 [2024-07-16 00:57:01.073775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.485 [2024-07-16 00:57:01.073784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.485 [2024-07-16 00:57:01.073804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.485 qpair failed and we were unable to recover it. 
00:30:43.485 [2024-07-16 00:57:01.083445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.485 [2024-07-16 00:57:01.083582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.485 [2024-07-16 00:57:01.083602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.485 [2024-07-16 00:57:01.083613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.485 [2024-07-16 00:57:01.083622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.485 [2024-07-16 00:57:01.083641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.485 qpair failed and we were unable to recover it. 00:30:43.485 [2024-07-16 00:57:01.093415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.485 [2024-07-16 00:57:01.093506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.485 [2024-07-16 00:57:01.093526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.485 [2024-07-16 00:57:01.093536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.485 [2024-07-16 00:57:01.093545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.485 [2024-07-16 00:57:01.093564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.485 qpair failed and we were unable to recover it. 00:30:43.485 [2024-07-16 00:57:01.103455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.485 [2024-07-16 00:57:01.103545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.485 [2024-07-16 00:57:01.103571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.485 [2024-07-16 00:57:01.103581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.485 [2024-07-16 00:57:01.103590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.485 [2024-07-16 00:57:01.103610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.485 qpair failed and we were unable to recover it. 
00:30:43.485 [2024-07-16 00:57:01.113692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.485 [2024-07-16 00:57:01.113814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.485 [2024-07-16 00:57:01.113835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.485 [2024-07-16 00:57:01.113845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.485 [2024-07-16 00:57:01.113855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.485 [2024-07-16 00:57:01.113874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.485 qpair failed and we were unable to recover it. 00:30:43.485 [2024-07-16 00:57:01.123552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.485 [2024-07-16 00:57:01.123649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.485 [2024-07-16 00:57:01.123670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.485 [2024-07-16 00:57:01.123682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.485 [2024-07-16 00:57:01.123695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.485 [2024-07-16 00:57:01.123716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.485 qpair failed and we were unable to recover it. 00:30:43.485 [2024-07-16 00:57:01.133580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.485 [2024-07-16 00:57:01.133688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.485 [2024-07-16 00:57:01.133715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.485 [2024-07-16 00:57:01.133726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.485 [2024-07-16 00:57:01.133735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.485 [2024-07-16 00:57:01.133755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.485 qpair failed and we were unable to recover it. 
00:30:43.485 [2024-07-16 00:57:01.143596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.485 [2024-07-16 00:57:01.143686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.143708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.143719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.143732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.143752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 00:30:43.486 [2024-07-16 00:57:01.153912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.486 [2024-07-16 00:57:01.154048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.154068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.154079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.154088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.154107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 00:30:43.486 [2024-07-16 00:57:01.163648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.486 [2024-07-16 00:57:01.163758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.163779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.163790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.163799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.163819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 
00:30:43.486 [2024-07-16 00:57:01.173698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.486 [2024-07-16 00:57:01.173828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.173849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.173860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.173869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.173889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 00:30:43.486 [2024-07-16 00:57:01.183712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.486 [2024-07-16 00:57:01.183801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.183822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.183833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.183842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.183862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 00:30:43.486 [2024-07-16 00:57:01.193984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.486 [2024-07-16 00:57:01.194116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.194136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.194146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.194155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.194175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 
00:30:43.486 [2024-07-16 00:57:01.203718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.486 [2024-07-16 00:57:01.203828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.203850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.203860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.203869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.203889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 00:30:43.486 [2024-07-16 00:57:01.213833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.486 [2024-07-16 00:57:01.213936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.213960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.213970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.213980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.214000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 00:30:43.486 [2024-07-16 00:57:01.223868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.486 [2024-07-16 00:57:01.223977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.223997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.224008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.224016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.224036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 
00:30:43.486 [2024-07-16 00:57:01.234056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.486 [2024-07-16 00:57:01.234189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.234212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.234223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.234237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.234270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 00:30:43.486 [2024-07-16 00:57:01.243933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.486 [2024-07-16 00:57:01.244035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.244056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.244066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.244076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.244095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 00:30:43.486 [2024-07-16 00:57:01.253972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.486 [2024-07-16 00:57:01.254061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.254082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.254092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.254101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.254120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 
00:30:43.486 [2024-07-16 00:57:01.263991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.486 [2024-07-16 00:57:01.264104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.264124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.264134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.264144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.264163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 00:30:43.486 [2024-07-16 00:57:01.274294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.486 [2024-07-16 00:57:01.274495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.486 [2024-07-16 00:57:01.274524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.486 [2024-07-16 00:57:01.274535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.486 [2024-07-16 00:57:01.274546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.486 [2024-07-16 00:57:01.274566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.486 qpair failed and we were unable to recover it. 00:30:43.486 [2024-07-16 00:57:01.284068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.487 [2024-07-16 00:57:01.284179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.487 [2024-07-16 00:57:01.284199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.487 [2024-07-16 00:57:01.284209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.487 [2024-07-16 00:57:01.284218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.487 [2024-07-16 00:57:01.284237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.487 qpair failed and we were unable to recover it. 
00:30:43.487 [2024-07-16 00:57:01.294100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.487 [2024-07-16 00:57:01.294196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.487 [2024-07-16 00:57:01.294217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.487 [2024-07-16 00:57:01.294228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.487 [2024-07-16 00:57:01.294236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.487 [2024-07-16 00:57:01.294267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.487 qpair failed and we were unable to recover it. 00:30:43.487 [2024-07-16 00:57:01.304119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.487 [2024-07-16 00:57:01.304232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.487 [2024-07-16 00:57:01.304261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.487 [2024-07-16 00:57:01.304274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.487 [2024-07-16 00:57:01.304282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.487 [2024-07-16 00:57:01.304303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.487 qpair failed and we were unable to recover it. 00:30:43.487 [2024-07-16 00:57:01.314369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.487 [2024-07-16 00:57:01.314541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.487 [2024-07-16 00:57:01.314563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.487 [2024-07-16 00:57:01.314573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.487 [2024-07-16 00:57:01.314582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.487 [2024-07-16 00:57:01.314603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.487 qpair failed and we were unable to recover it. 
00:30:43.746 [2024-07-16 00:57:01.324174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.324295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.324317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.324328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.324341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.324361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 00:30:43.747 [2024-07-16 00:57:01.334148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.334269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.334290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.334300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.334309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.334328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 00:30:43.747 [2024-07-16 00:57:01.344172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.344274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.344295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.344306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.344315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.344334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 
00:30:43.747 [2024-07-16 00:57:01.354456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.354619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.354639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.354649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.354659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.354677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 00:30:43.747 [2024-07-16 00:57:01.364331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.364432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.364452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.364463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.364472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.364491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 00:30:43.747 [2024-07-16 00:57:01.374408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.374559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.374580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.374590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.374599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.374618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 
00:30:43.747 [2024-07-16 00:57:01.384370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.384495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.384518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.384528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.384537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.384557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 00:30:43.747 [2024-07-16 00:57:01.394658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.394803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.394824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.394835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.394844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.394864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 00:30:43.747 [2024-07-16 00:57:01.404407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.404508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.404528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.404539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.404547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.404567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 
00:30:43.747 [2024-07-16 00:57:01.414512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.414610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.414631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.414645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.414654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.414673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 00:30:43.747 [2024-07-16 00:57:01.424453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.424547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.424568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.424578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.424587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.424606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 00:30:43.747 [2024-07-16 00:57:01.434751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.434869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.434890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.434900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.434909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.434929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 
00:30:43.747 [2024-07-16 00:57:01.444594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.444699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.444719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.444729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.444738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.444757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 00:30:43.747 [2024-07-16 00:57:01.454625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:43.747 [2024-07-16 00:57:01.454716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:43.747 [2024-07-16 00:57:01.454736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:43.747 [2024-07-16 00:57:01.454746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:43.747 [2024-07-16 00:57:01.454755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xeaafd0 00:30:43.747 [2024-07-16 00:57:01.454774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:43.747 qpair failed and we were unable to recover it. 00:30:43.748 [2024-07-16 00:57:01.454871] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:43.748 A controller has encountered a failure and is being reset. 00:30:43.748 Controller properly reset. 00:30:43.748 Initializing NVMe Controllers 00:30:43.748 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:43.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:43.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:43.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:43.748 Initialization complete. Launching workers. 
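The long run of 'Unknown controller ID 0x1' / 'Connect command failed' records above is the expected shape of this target-disconnect test case: the host keeps retrying the I/O-queue CONNECT against a controller the target side no longer knows about, each attempt is rejected with sct 1, sc 130 (0x82, which appears to map to the NVMe-oF "Connect Invalid Parameters" status), and the cycle only ends once the failed keep-alive triggers the controller reset logged above. When triaging a console log like this one, the burst can be summarised with standard tools; the following is only a triage sketch (the log path is an assumption, and it is not part of the SPDK test suite):

#!/usr/bin/env bash
# Triage sketch, not part of the SPDK test suite: summarise the CONNECT-failure
# burst in a saved console log such as this run's output.
LOG=${1:-console.log}   # assumption: wherever the console output was captured

# Number of qpairs the initiator gave up on.
printf 'unrecovered qpair failures: '
grep -c 'qpair failed and we were unable to recover it' "$LOG"

# Tally of the (sct, sc) status pairs reported for the failed Connect commands.
echo 'connect error status codes:'
grep -o 'sct [0-9]*, sc [0-9]*' "$LOG" | sort | uniq -c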
00:30:43.748 Starting thread on core 1 00:30:43.748 Starting thread on core 2 00:30:43.748 Starting thread on core 3 00:30:43.748 Starting thread on core 0 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:44.007 00:30:44.007 real 0m11.461s 00:30:44.007 user 0m21.617s 00:30:44.007 sys 0m4.320s 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:44.007 ************************************ 00:30:44.007 END TEST nvmf_target_disconnect_tc2 00:30:44.007 ************************************ 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:44.007 rmmod nvme_tcp 00:30:44.007 rmmod nvme_fabrics 00:30:44.007 rmmod nvme_keyring 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3225532 ']' 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3225532 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3225532 ']' 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3225532 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3225532 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3225532' 00:30:44.007 killing process with pid 3225532 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3225532 00:30:44.007 00:57:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3225532 00:30:44.267 
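The shutdown trace above (kill -0 on the nvmf target pid, ps --no-headers -o comm= to confirm the process name, the 'reactor_4 = sudo' guard, then kill and wait) is the harness's usual kill-and-reap sequence. A condensed, stand-alone sketch of that sequence is shown below; the function name is illustrative and this is not the actual autotest_common.sh implementation:

# Sketch of the kill-and-reap pattern traced above (illustrative only).
kill_and_reap() {
    local pid=$1
    [ -n "$pid" ] || return 1                     # no pid supplied
    kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_4 for an SPDK app
    [ "$name" = sudo ] && return 1                # never signal a bare sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it when it is our child
}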
00:57:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:44.267 00:57:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:44.267 00:57:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:44.267 00:57:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:44.267 00:57:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:44.267 00:57:02 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.267 00:57:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:44.267 00:57:02 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.839 00:57:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:46.839 00:30:46.839 real 0m20.103s 00:30:46.839 user 0m48.809s 00:30:46.839 sys 0m9.197s 00:30:46.839 00:57:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:46.839 00:57:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:46.839 ************************************ 00:30:46.839 END TEST nvmf_target_disconnect 00:30:46.839 ************************************ 00:30:46.839 00:57:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:46.839 00:57:04 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:30:46.839 00:57:04 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:46.839 00:57:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:46.839 00:57:04 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:30:46.839 00:30:46.839 real 23m29.518s 00:30:46.839 user 52m1.464s 00:30:46.839 sys 6m45.118s 00:30:46.839 00:57:04 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:46.839 00:57:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:46.839 ************************************ 00:30:46.839 END TEST nvmf_tcp 00:30:46.839 ************************************ 00:30:46.839 00:57:04 -- common/autotest_common.sh@1142 -- # return 0 00:30:46.839 00:57:04 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:30:46.839 00:57:04 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:46.839 00:57:04 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:46.839 00:57:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:46.839 00:57:04 -- common/autotest_common.sh@10 -- # set +x 00:30:46.839 ************************************ 00:30:46.839 START TEST spdkcli_nvmf_tcp 00:30:46.839 ************************************ 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:46.839 * Looking for test storage... 
00:30:46.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3227384 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3227384 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3227384 ']' 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:46.839 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:46.839 [2024-07-16 00:57:04.465041] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:30:46.839 [2024-07-16 00:57:04.465097] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3227384 ] 00:30:46.839 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.839 [2024-07-16 00:57:04.547089] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:46.839 [2024-07-16 00:57:04.638623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.839 [2024-07-16 00:57:04.638628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.099 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:47.099 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:30:47.099 00:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:47.099 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:47.099 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:47.099 00:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:47.099 00:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:47.099 00:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:47.099 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:47.099 00:57:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:47.099 00:57:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:47.099 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:47.099 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:47.099 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:47.099 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:47.099 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:47.099 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:47.099 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:47.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:47.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:47.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:47.100 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:47.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:47.100 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:47.100 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:47.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:47.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:47.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:47.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:47.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:47.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:47.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:47.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:47.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:47.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:47.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:47.100 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:47.100 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:47.100 ' 00:30:50.389 [2024-07-16 00:57:07.480660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.325 [2024-07-16 00:57:08.805401] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:53.856 [2024-07-16 00:57:11.261699] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:55.761 [2024-07-16 00:57:13.376899] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:57.139 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:57.139 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:57.139 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:57.139 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:57.139 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:57.139 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:57.139 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:57.139 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:57.139 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:57.139 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:57.139 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:57.139 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:57.139 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:57.398 00:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:57.398 00:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:57.398 00:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:57.398 00:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:57.398 00:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:57.398 00:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:57.398 00:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:57.398 00:57:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:57.963 00:57:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:57.963 00:57:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:57.963 00:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:57.963 00:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:57.963 00:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:57.963 00:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:57.963 00:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:57.963 00:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:57.963 00:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:57.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:57.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:57.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:57.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:57.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:57.963 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:57.963 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:57.963 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:57.963 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:57.963 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:57.963 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:57.963 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:57.963 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:57.963 ' 00:31:04.521 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:04.521 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:04.521 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:04.521 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:04.521 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:04.521 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:04.521 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:04.521 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:04.521 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:04.521 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:04.521 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:31:04.521 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:04.521 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:04.521 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3227384 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3227384 ']' 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3227384 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3227384 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3227384' 00:31:04.521 killing process with pid 3227384 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3227384 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3227384 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3227384 ']' 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3227384 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3227384 ']' 00:31:04.521 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3227384 00:31:04.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3227384) - No such process 00:31:04.522 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3227384 is not found' 00:31:04.522 Process with pid 3227384 is not found 00:31:04.522 00:57:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:04.522 00:57:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:04.522 00:57:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:04.522 00:31:04.522 real 0m17.233s 00:31:04.522 user 0m37.846s 00:31:04.522 sys 0m0.972s 00:31:04.522 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:04.522 00:57:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:04.522 ************************************ 00:31:04.522 END TEST spdkcli_nvmf_tcp 00:31:04.522 ************************************ 00:31:04.522 00:57:21 -- common/autotest_common.sh@1142 -- # return 0 00:31:04.522 00:57:21 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:04.522 00:57:21 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:04.522 00:57:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:04.522 00:57:21 -- common/autotest_common.sh@10 -- # set +x 00:31:04.522 ************************************ 00:31:04.522 START TEST nvmf_identify_passthru 00:31:04.522 ************************************ 00:31:04.522 00:57:21 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:04.522 * Looking for test storage... 00:31:04.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:04.522 00:57:21 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.522 00:57:21 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.522 00:57:21 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.522 00:57:21 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.522 00:57:21 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.522 00:57:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.522 00:57:21 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.522 00:57:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:04.522 00:57:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:04.522 00:57:21 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.522 00:57:21 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.522 00:57:21 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.522 00:57:21 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.522 00:57:21 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.522 00:57:21 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.522 00:57:21 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.522 00:57:21 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:04.522 00:57:21 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.522 00:57:21 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.522 00:57:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:04.522 00:57:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:04.522 00:57:21 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:31:04.522 00:57:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:09.798 00:57:27 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:09.798 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:09.798 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.798 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:09.799 Found net devices under 0000:af:00.0: cvl_0_0 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:09.799 Found net devices under 0000:af:00.1: cvl_0_1 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:09.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:09.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:31:09.799 00:31:09.799 --- 10.0.0.2 ping statistics --- 00:31:09.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.799 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:09.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:09.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:31:09.799 00:31:09.799 --- 10.0.0.1 ping statistics --- 00:31:09.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.799 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:09.799 00:57:27 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:09.799 00:57:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:09.799 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:09.799 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:09.799 00:57:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:09.799 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:31:09.799 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:31:09.799 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:31:09.799 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:31:09.799 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:09.799 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:31:09.799 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:09.799 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:09.799 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:10.059 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:31:10.059 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:86:00.0 00:31:10.059 00:57:27 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:86:00.0 00:31:10.059 00:57:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:86:00.0 00:31:10.059 00:57:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:86:00.0 ']' 00:31:10.059 00:57:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:31:10.059 00:57:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:10.059 00:57:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:10.059 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.252 
00:57:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ916308MR1P0FGN 00:31:14.253 00:57:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:86:00.0' -i 0 00:31:14.253 00:57:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:14.253 00:57:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:14.253 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.439 00:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:18.439 00:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:18.439 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:18.439 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:18.439 00:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:18.439 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:18.439 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:18.439 00:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3235475 00:31:18.439 00:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:18.439 00:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:18.439 00:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3235475 00:31:18.439 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3235475 ']' 00:31:18.439 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.439 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:18.439 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.439 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:18.439 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:18.698 [2024-07-16 00:57:36.309482] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:31:18.698 [2024-07-16 00:57:36.309540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:18.698 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.698 [2024-07-16 00:57:36.399890] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:18.698 [2024-07-16 00:57:36.491825] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:18.698 [2024-07-16 00:57:36.491870] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:18.698 [2024-07-16 00:57:36.491881] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:18.698 [2024-07-16 00:57:36.491890] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:18.698 [2024-07-16 00:57:36.491898] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:18.698 [2024-07-16 00:57:36.491953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.698 [2024-07-16 00:57:36.491983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:18.698 [2024-07-16 00:57:36.492095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:18.698 [2024-07-16 00:57:36.492095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:31:18.957 00:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:18.957 INFO: Log level set to 20 00:31:18.957 INFO: Requests: 00:31:18.957 { 00:31:18.957 "jsonrpc": "2.0", 00:31:18.957 "method": "nvmf_set_config", 00:31:18.957 "id": 1, 00:31:18.957 "params": { 00:31:18.957 "admin_cmd_passthru": { 00:31:18.957 "identify_ctrlr": true 00:31:18.957 } 00:31:18.957 } 00:31:18.957 } 00:31:18.957 00:31:18.957 INFO: response: 00:31:18.957 { 00:31:18.957 "jsonrpc": "2.0", 00:31:18.957 "id": 1, 00:31:18.957 "result": true 00:31:18.957 } 00:31:18.957 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.957 00:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:18.957 INFO: Setting log level to 20 00:31:18.957 INFO: Setting log level to 20 00:31:18.957 INFO: Log level set to 20 00:31:18.957 INFO: Log level set to 20 00:31:18.957 INFO: Requests: 00:31:18.957 { 00:31:18.957 "jsonrpc": "2.0", 00:31:18.957 "method": "framework_start_init", 00:31:18.957 "id": 1 00:31:18.957 } 00:31:18.957 00:31:18.957 INFO: Requests: 00:31:18.957 { 00:31:18.957 "jsonrpc": "2.0", 00:31:18.957 "method": "framework_start_init", 00:31:18.957 "id": 1 00:31:18.957 } 00:31:18.957 00:31:18.957 [2024-07-16 00:57:36.661193] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:18.957 INFO: response: 00:31:18.957 { 00:31:18.957 "jsonrpc": "2.0", 00:31:18.957 "id": 1, 00:31:18.957 "result": true 00:31:18.957 } 00:31:18.957 00:31:18.957 INFO: response: 00:31:18.957 { 00:31:18.957 "jsonrpc": "2.0", 00:31:18.957 "id": 1, 00:31:18.957 "result": true 00:31:18.957 } 00:31:18.957 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.957 00:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.957 00:57:36 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:18.957 INFO: Setting log level to 40 00:31:18.957 INFO: Setting log level to 40 00:31:18.957 INFO: Setting log level to 40 00:31:18.957 [2024-07-16 00:57:36.674889] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.957 00:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:18.957 00:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.957 00:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:22.247 Nvme0n1 00:31:22.247 00:57:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.247 00:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:22.247 00:57:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.247 00:57:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:22.247 00:57:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.247 00:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:22.247 00:57:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.247 00:57:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:22.247 00:57:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.248 00:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.248 00:57:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.248 00:57:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:22.248 [2024-07-16 00:57:39.606564] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.248 00:57:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.248 00:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:22.248 00:57:39 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.248 00:57:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:22.248 [ 00:31:22.248 { 00:31:22.248 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:22.248 "subtype": "Discovery", 00:31:22.248 "listen_addresses": [], 00:31:22.248 "allow_any_host": true, 00:31:22.248 "hosts": [] 00:31:22.248 }, 00:31:22.248 { 00:31:22.248 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.248 "subtype": "NVMe", 00:31:22.248 "listen_addresses": [ 00:31:22.248 { 00:31:22.248 "trtype": "TCP", 00:31:22.248 "adrfam": "IPv4", 00:31:22.248 "traddr": "10.0.0.2", 00:31:22.248 "trsvcid": "4420" 00:31:22.248 } 00:31:22.248 ], 00:31:22.248 "allow_any_host": true, 00:31:22.248 "hosts": [], 00:31:22.248 "serial_number": 
"SPDK00000000000001", 00:31:22.248 "model_number": "SPDK bdev Controller", 00:31:22.248 "max_namespaces": 1, 00:31:22.248 "min_cntlid": 1, 00:31:22.248 "max_cntlid": 65519, 00:31:22.248 "namespaces": [ 00:31:22.248 { 00:31:22.248 "nsid": 1, 00:31:22.248 "bdev_name": "Nvme0n1", 00:31:22.248 "name": "Nvme0n1", 00:31:22.248 "nguid": "54707A11172D4B9681FF7B153E2D6C58", 00:31:22.248 "uuid": "54707a11-172d-4b96-81ff-7b153e2d6c58" 00:31:22.248 } 00:31:22.248 ] 00:31:22.248 } 00:31:22.248 ] 00:31:22.248 00:57:39 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.248 00:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:22.248 00:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:22.248 00:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:22.248 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.248 00:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ916308MR1P0FGN 00:31:22.248 00:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:22.248 00:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:22.248 00:57:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:22.248 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.506 00:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:22.506 00:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ916308MR1P0FGN '!=' BTLJ916308MR1P0FGN ']' 00:31:22.506 00:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:22.506 00:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:22.506 00:57:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.506 00:57:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:22.506 00:57:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.506 00:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:22.506 00:57:40 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:22.506 00:57:40 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:22.506 00:57:40 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:22.506 00:57:40 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:22.506 00:57:40 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:22.506 00:57:40 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:22.507 00:57:40 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:22.507 rmmod nvme_tcp 00:31:22.507 rmmod nvme_fabrics 00:31:22.507 rmmod nvme_keyring 00:31:22.507 00:57:40 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:22.507 00:57:40 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:22.507 00:57:40 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:22.507 00:57:40 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3235475 ']' 00:31:22.507 00:57:40 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3235475 00:31:22.507 00:57:40 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3235475 ']' 00:31:22.507 00:57:40 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3235475 00:31:22.507 00:57:40 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:31:22.507 00:57:40 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:22.507 00:57:40 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3235475 00:31:22.507 00:57:40 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:22.507 00:57:40 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:22.507 00:57:40 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3235475' 00:31:22.507 killing process with pid 3235475 00:31:22.507 00:57:40 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3235475 00:31:22.507 00:57:40 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3235475 00:31:24.410 00:57:41 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:24.410 00:57:41 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:24.410 00:57:41 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:24.410 00:57:41 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:24.410 00:57:41 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:24.410 00:57:41 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.410 00:57:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:24.410 00:57:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.312 00:57:43 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:26.312 00:31:26.312 real 0m22.274s 00:31:26.312 user 0m28.687s 00:31:26.312 sys 0m5.312s 00:31:26.312 00:57:43 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:26.312 00:57:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.312 ************************************ 00:31:26.312 END TEST nvmf_identify_passthru 00:31:26.312 ************************************ 00:31:26.312 00:57:43 -- common/autotest_common.sh@1142 -- # return 0 00:31:26.312 00:57:43 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:26.312 00:57:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:26.312 00:57:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:26.312 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.312 ************************************ 00:31:26.312 START TEST nvmf_dif 00:31:26.312 ************************************ 00:31:26.312 00:57:43 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:26.312 * Looking for test storage... 
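For reference, the nvmf_identify_passthru run that just finished reduces to the sequence below. This is a condensed sketch: the log's rpc_cmd wrapper is assumed to stand for scripts/rpc.py talking to the default /var/tmp/spdk.sock socket, and the long workspace paths are shortened.

# attach the local PCIe drive as bdev Nvme0, then expose its namespace over NVMe/TCP
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:86:00.0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# identify the controller through the TCP listener; the test passes because the
# serial/model seen over the fabric (BTLJ916308MR1P0FGN / INTEL) match the underlying drive
build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1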
00:31:26.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:26.312 00:57:44 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:26.312 00:57:44 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:26.312 00:57:44 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:26.312 00:57:44 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:26.312 00:57:44 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.312 00:57:44 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.312 00:57:44 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.312 00:57:44 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:31:26.312 00:57:44 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:26.312 00:57:44 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:26.313 00:57:44 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:26.313 00:57:44 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:26.313 00:57:44 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:26.313 00:57:44 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:26.313 00:57:44 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:26.313 00:57:44 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:26.313 00:57:44 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:26.313 00:57:44 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:26.313 00:57:44 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:26.313 00:57:44 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:26.313 00:57:44 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:26.313 00:57:44 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:26.313 00:57:44 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:26.313 00:57:44 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:26.313 00:57:44 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.313 00:57:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:26.313 00:57:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.313 00:57:44 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:26.313 00:57:44 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:26.313 00:57:44 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:26.313 00:57:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@298 
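One detail from the common.sh sourcing above: the host identity used by any later `nvme connect` calls is generated once with nvme-cli. Roughly (the exact parsing lives in test/nvmf/common.sh; the assignments here are a sketch):

# derive a host NQN/ID pair for the initiator side
NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # bare UUID portion
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")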
-- # mlx=() 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:32.882 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:32.882 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:32.882 Found net devices under 0000:af:00.0: cvl_0_0 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:32.882 Found net devices under 0000:af:00.1: cvl_0_1 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:32.882 00:57:49 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.883 00:57:49 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:32.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:31:32.883 00:31:32.883 --- 10.0.0.2 ping statistics --- 00:31:32.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.883 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:32.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:31:32.883 00:31:32.883 --- 10.0.0.1 ping statistics --- 00:31:32.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.883 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:32.883 00:57:49 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:34.787 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:86:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:34.787 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:31:34.787 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:35.045 00:57:52 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.045 00:57:52 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:35.045 00:57:52 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:35.045 00:57:52 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.045 00:57:52 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:35.045 00:57:52 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:35.045 00:57:52 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:35.045 00:57:52 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:35.045 00:57:52 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:35.045 00:57:52 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:35.045 00:57:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:35.045 00:57:52 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3241051 00:31:35.045 00:57:52 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:35.045 00:57:52 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3241051 00:31:35.045 00:57:52 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3241051 ']' 00:31:35.045 00:57:52 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.045 00:57:52 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:35.045 00:57:52 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.045 00:57:52 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:35.045 00:57:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:35.045 [2024-07-16 00:57:52.789370] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:31:35.045 [2024-07-16 00:57:52.789432] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.045 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.045 [2024-07-16 00:57:52.878140] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.304 [2024-07-16 00:57:52.968242] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.304 [2024-07-16 00:57:52.968291] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.304 [2024-07-16 00:57:52.968301] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.304 [2024-07-16 00:57:52.968310] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.304 [2024-07-16 00:57:52.968318] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
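Stripped of the xtrace noise, the nvmf_tcp_init / nvmfappstart sequence traced above is just the plumbing below. The interface names cvl_0_0/cvl_0_1 come from the two e810 ports detected earlier, and the nvmf_tgt path is shortened.

# put one port in a private namespace so target (10.0.0.2) and initiator (10.0.0.1)
# exchange NVMe/TCP traffic over the physical link
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# the target then runs inside the namespace and autotest waits for its RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF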
00:31:35.304 [2024-07-16 00:57:52.968345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.925 00:57:53 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:35.925 00:57:53 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:31:35.925 00:57:53 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:35.925 00:57:53 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:35.925 00:57:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:36.182 00:57:53 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.182 00:57:53 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:36.182 00:57:53 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:36.182 00:57:53 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.182 00:57:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:36.182 [2024-07-16 00:57:53.769903] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.182 00:57:53 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.182 00:57:53 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:36.182 00:57:53 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:36.182 00:57:53 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.182 00:57:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:36.182 ************************************ 00:31:36.182 START TEST fio_dif_1_default 00:31:36.182 ************************************ 00:31:36.182 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:31:36.182 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:36.182 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:36.182 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:36.182 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:36.182 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:36.182 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:36.183 bdev_null0 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:36.183 [2024-07-16 00:57:53.842217] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.183 { 00:31:36.183 "params": { 00:31:36.183 "name": "Nvme$subsystem", 00:31:36.183 "trtype": "$TEST_TRANSPORT", 00:31:36.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.183 "adrfam": "ipv4", 00:31:36.183 "trsvcid": "$NVMF_PORT", 00:31:36.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.183 "hdgst": ${hdgst:-false}, 00:31:36.183 "ddgst": ${ddgst:-false} 00:31:36.183 }, 00:31:36.183 "method": "bdev_nvme_attach_controller" 00:31:36.183 } 00:31:36.183 EOF 00:31:36.183 )") 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:36.183 "params": { 00:31:36.183 "name": "Nvme0", 00:31:36.183 "trtype": "tcp", 00:31:36.183 "traddr": "10.0.0.2", 00:31:36.183 "adrfam": "ipv4", 00:31:36.183 "trsvcid": "4420", 00:31:36.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:36.183 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:36.183 "hdgst": false, 00:31:36.183 "ddgst": false 00:31:36.183 }, 00:31:36.183 "method": "bdev_nvme_attach_controller" 00:31:36.183 }' 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:36.183 00:57:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.440 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:36.440 fio-3.35 00:31:36.440 Starting 1 thread 00:31:36.699 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.901 00:31:48.901 filename0: (groupid=0, jobs=1): err= 0: pid=3241710: Tue Jul 16 00:58:04 2024 00:31:48.901 read: IOPS=189, BW=759KiB/s (778kB/s)(7600KiB/10009msec) 00:31:48.901 slat (nsec): min=9141, max=69812, avg=9532.76, stdev=1724.86 00:31:48.901 clat (usec): min=652, max=45639, avg=21044.90, stdev=20213.23 00:31:48.901 lat (usec): min=661, max=45671, avg=21054.43, stdev=20213.23 00:31:48.901 clat percentiles (usec): 00:31:48.901 | 1.00th=[ 709], 5.00th=[ 725], 10.00th=[ 742], 20.00th=[ 799], 00:31:48.901 | 30.00th=[ 807], 40.00th=[ 824], 50.00th=[41157], 60.00th=[41157], 00:31:48.901 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:48.901 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:31:48.901 | 99.99th=[45876] 00:31:48.901 bw ( KiB/s): min= 704, max= 768, per=99.83%, avg=758.40, stdev=21.02, samples=20 00:31:48.901 iops : min= 176, max= 192, 
avg=189.60, stdev= 5.26, samples=20 00:31:48.901 lat (usec) : 750=10.95%, 1000=38.95% 00:31:48.901 lat (msec) : 50=50.11% 00:31:48.901 cpu : usr=94.60%, sys=5.10%, ctx=13, majf=0, minf=258 00:31:48.901 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.901 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.901 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:48.901 00:31:48.901 Run status group 0 (all jobs): 00:31:48.901 READ: bw=759KiB/s (778kB/s), 759KiB/s-759KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10009-10009msec 00:31:48.901 00:58:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:48.901 00:58:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:48.901 00:58:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:48.901 00:58:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:48.901 00:58:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:48.901 00:58:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.902 00:31:48.902 real 0m11.336s 00:31:48.902 user 0m22.042s 00:31:48.902 sys 0m0.910s 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:48.902 ************************************ 00:31:48.902 END TEST fio_dif_1_default 00:31:48.902 ************************************ 00:31:48.902 00:58:05 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:48.902 00:58:05 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:48.902 00:58:05 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:48.902 00:58:05 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:48.902 00:58:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:48.902 ************************************ 00:31:48.902 START TEST fio_dif_1_multi_subsystems 00:31:48.902 ************************************ 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:48.902 00:58:05 
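The fio_dif_1_default job summarized above drives the target through fio's SPDK bdev plugin rather than a kernel block device. Condensed, the invocation is as follows (paths shortened; fd 62 carries the bdev_nvme_attach_controller JSON printed earlier, fd 61 the fio job file built by gen_fio_conf):

LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
# the JSON attaches a single controller (trtype tcp, traddr 10.0.0.2, trsvcid 4420,
# subnqn nqn.2016-06.io.spdk:cnode0), so the job reads the namespace exposed by cnode0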
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:48.902 bdev_null0 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:48.902 [2024-07-16 00:58:05.249340] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:48.902 bdev_null1 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:48.902 { 00:31:48.902 "params": { 00:31:48.902 "name": "Nvme$subsystem", 00:31:48.902 "trtype": "$TEST_TRANSPORT", 00:31:48.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:48.902 "adrfam": "ipv4", 00:31:48.902 "trsvcid": "$NVMF_PORT", 00:31:48.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:48.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:48.902 "hdgst": ${hdgst:-false}, 00:31:48.902 "ddgst": ${ddgst:-false} 00:31:48.902 }, 00:31:48.902 "method": "bdev_nvme_attach_controller" 00:31:48.902 } 00:31:48.902 EOF 00:31:48.902 )") 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # 
local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:48.902 { 00:31:48.902 "params": { 00:31:48.902 "name": "Nvme$subsystem", 00:31:48.902 "trtype": "$TEST_TRANSPORT", 00:31:48.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:48.902 "adrfam": "ipv4", 00:31:48.902 "trsvcid": "$NVMF_PORT", 00:31:48.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:48.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:48.902 "hdgst": ${hdgst:-false}, 00:31:48.902 "ddgst": ${ddgst:-false} 00:31:48.902 }, 00:31:48.902 "method": "bdev_nvme_attach_controller" 00:31:48.902 } 00:31:48.902 EOF 00:31:48.902 )") 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:48.902 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:48.903 "params": { 00:31:48.903 "name": "Nvme0", 00:31:48.903 "trtype": "tcp", 00:31:48.903 "traddr": "10.0.0.2", 00:31:48.903 "adrfam": "ipv4", 00:31:48.903 "trsvcid": "4420", 00:31:48.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:48.903 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:48.903 "hdgst": false, 00:31:48.903 "ddgst": false 00:31:48.903 }, 00:31:48.903 "method": "bdev_nvme_attach_controller" 00:31:48.903 },{ 00:31:48.903 "params": { 00:31:48.903 "name": "Nvme1", 00:31:48.903 "trtype": "tcp", 00:31:48.903 "traddr": "10.0.0.2", 00:31:48.903 "adrfam": "ipv4", 00:31:48.903 "trsvcid": "4420", 00:31:48.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:48.903 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:48.903 "hdgst": false, 00:31:48.903 "ddgst": false 00:31:48.903 }, 00:31:48.903 "method": "bdev_nvme_attach_controller" 00:31:48.903 }' 00:31:48.903 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:48.903 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:48.903 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:48.903 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:48.903 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:48.903 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:48.903 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:48.903 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:48.903 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:48.903 00:58:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:48.903 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:48.903 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:48.903 fio-3.35 00:31:48.903 Starting 2 threads 00:31:48.903 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.994 00:31:58.994 filename0: (groupid=0, jobs=1): err= 0: pid=3243715: Tue Jul 16 00:58:16 2024 00:31:58.994 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10032msec) 00:31:58.994 slat (nsec): min=9179, max=33897, avg=11179.53, stdev=2954.46 00:31:58.994 clat (usec): min=40783, max=43089, avg=41420.92, stdev=581.63 00:31:58.994 lat (usec): min=40792, max=43105, avg=41432.10, stdev=581.78 00:31:58.994 clat percentiles (usec): 00:31:58.994 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:58.994 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:31:58.994 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:58.994 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:31:58.994 | 99.99th=[43254] 
00:31:58.994 bw ( KiB/s): min= 352, max= 416, per=33.71%, avg=385.60, stdev=12.61, samples=20 00:31:58.994 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:31:58.994 lat (msec) : 50=100.00% 00:31:58.994 cpu : usr=97.07%, sys=2.64%, ctx=12, majf=0, minf=73 00:31:58.994 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:58.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.994 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:58.994 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:58.994 filename1: (groupid=0, jobs=1): err= 0: pid=3243716: Tue Jul 16 00:58:16 2024 00:31:58.994 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10006msec) 00:31:58.994 slat (nsec): min=9218, max=35338, avg=10442.91, stdev=2312.51 00:31:58.994 clat (usec): min=692, max=42281, avg=21079.64, stdev=20197.96 00:31:58.994 lat (usec): min=702, max=42291, avg=21090.08, stdev=20197.24 00:31:58.994 clat percentiles (usec): 00:31:58.994 | 1.00th=[ 709], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 807], 00:31:58.994 | 30.00th=[ 816], 40.00th=[ 832], 50.00th=[41157], 60.00th=[41157], 00:31:58.994 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:58.994 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:58.994 | 99.99th=[42206] 00:31:58.994 bw ( KiB/s): min= 672, max= 768, per=66.20%, avg=756.80, stdev=28.00, samples=20 00:31:58.994 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:31:58.994 lat (usec) : 750=13.13%, 1000=36.02% 00:31:58.994 lat (msec) : 2=0.63%, 50=50.21% 00:31:58.994 cpu : usr=97.22%, sys=2.48%, ctx=27, majf=0, minf=175 00:31:58.994 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:58.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.994 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:58.994 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:58.994 00:31:58.994 Run status group 0 (all jobs): 00:31:58.994 READ: bw=1142KiB/s (1169kB/s), 386KiB/s-758KiB/s (395kB/s-776kB/s), io=11.2MiB (11.7MB), run=10006-10032msec 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.994 00:31:58.994 real 0m11.376s 00:31:58.994 user 0m31.253s 00:31:58.994 sys 0m0.857s 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:58.994 00:58:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:58.994 ************************************ 00:31:58.994 END TEST fio_dif_1_multi_subsystems 00:31:58.994 ************************************ 00:31:58.994 00:58:16 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:58.994 00:58:16 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:58.994 00:58:16 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:58.994 00:58:16 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.994 00:58:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:58.994 ************************************ 00:31:58.994 START TEST fio_dif_rand_params 00:31:58.994 ************************************ 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:58.994 bdev_null0 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:58.994 [2024-07-16 00:58:16.702061] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.994 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:58.995 { 00:31:58.995 "params": { 00:31:58.995 "name": "Nvme$subsystem", 00:31:58.995 "trtype": "$TEST_TRANSPORT", 00:31:58.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:58.995 "adrfam": "ipv4", 00:31:58.995 "trsvcid": "$NVMF_PORT", 00:31:58.995 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:58.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:58.995 "hdgst": ${hdgst:-false}, 00:31:58.995 "ddgst": ${ddgst:-false} 00:31:58.995 }, 00:31:58.995 "method": "bdev_nvme_attach_controller" 00:31:58.995 } 00:31:58.995 EOF 00:31:58.995 )") 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:58.995 "params": { 00:31:58.995 "name": "Nvme0", 00:31:58.995 "trtype": "tcp", 00:31:58.995 "traddr": "10.0.0.2", 00:31:58.995 "adrfam": "ipv4", 00:31:58.995 "trsvcid": "4420", 00:31:58.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:58.995 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:58.995 "hdgst": false, 00:31:58.995 "ddgst": false 00:31:58.995 }, 00:31:58.995 "method": "bdev_nvme_attach_controller" 00:31:58.995 }' 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:58.995 00:58:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:59.596 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:59.597 ... 
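For reference, the fio step traced above can be reproduced by hand roughly as sketched below. The LD_PRELOAD path, the spdk_bdev ioengine, the --spdk_json_conf mechanism and the bdev_nvme_attach_controller parameters are lifted from the trace; the on-disk file paths, the "subsystems"/"bdev" envelope around the config entry and the job-file body (reconstructed from the NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5 parameters echoed earlier) are assumptions, since dif.sh builds both on the fly and streams them over /dev/fd/62 and /dev/fd/61.

# Sketch only: file paths, the "subsystems" envelope and the job file are assumptions;
# everything else mirrors the invocation traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

cat > /tmp/dif.job <<'EOF'
[global]
; thread=1 is required by the SPDK bdev fio plugin
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5

[filename0]
; assumed bdev name: attaching controller "Nvme0" exposes namespace 1 as Nvme0n1
filename=Nvme0n1
EOF

LD_PRELOAD=$SPDK/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.job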
00:31:59.597 fio-3.35 00:31:59.597 Starting 3 threads 00:31:59.597 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.862 00:32:04.862 filename0: (groupid=0, jobs=1): err= 0: pid=3245823: Tue Jul 16 00:58:22 2024 00:32:04.862 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(129MiB/5007msec) 00:32:04.862 slat (nsec): min=9847, max=81633, avg=28044.66, stdev=4368.72 00:32:04.862 clat (usec): min=5339, max=55112, avg=14559.42, stdev=9529.52 00:32:04.862 lat (usec): min=5361, max=55143, avg=14587.46, stdev=9529.22 00:32:04.862 clat percentiles (usec): 00:32:04.862 | 1.00th=[ 6194], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10552], 00:32:04.862 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12649], 60.00th=[13042], 00:32:04.862 | 70.00th=[13566], 80.00th=[14222], 90.00th=[16188], 95.00th=[49021], 00:32:04.862 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54789], 99.95th=[55313], 00:32:04.862 | 99.99th=[55313] 00:32:04.862 bw ( KiB/s): min=21248, max=31488, per=32.82%, avg=26265.60, stdev=3249.84, samples=10 00:32:04.862 iops : min= 166, max= 246, avg=205.20, stdev=25.39, samples=10 00:32:04.862 lat (msec) : 10=15.74%, 20=78.43%, 50=1.17%, 100=4.66% 00:32:04.862 cpu : usr=93.35%, sys=6.05%, ctx=10, majf=0, minf=113 00:32:04.862 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.862 issued rwts: total=1029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.862 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:04.862 filename0: (groupid=0, jobs=1): err= 0: pid=3245824: Tue Jul 16 00:58:22 2024 00:32:04.862 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(134MiB/5005msec) 00:32:04.862 slat (nsec): min=9824, max=72837, avg=28559.21, stdev=4275.95 00:32:04.862 clat (usec): min=5040, max=55684, avg=14022.86, stdev=6653.79 00:32:04.862 lat (usec): min=5061, max=55715, avg=14051.42, stdev=6653.55 00:32:04.862 clat percentiles (usec): 00:32:04.862 | 1.00th=[ 5800], 5.00th=[ 7898], 10.00th=[ 9110], 20.00th=[10159], 00:32:04.862 | 30.00th=[11338], 40.00th=[12649], 50.00th=[13435], 60.00th=[14222], 00:32:04.862 | 70.00th=[15139], 80.00th=[16319], 90.00th=[17433], 95.00th=[18482], 00:32:04.862 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55313], 99.95th=[55837], 00:32:04.862 | 99.99th=[55837] 00:32:04.862 bw ( KiB/s): min=21760, max=32256, per=34.10%, avg=27289.60, stdev=2835.84, samples=10 00:32:04.862 iops : min= 170, max= 252, avg=213.20, stdev=22.16, samples=10 00:32:04.862 lat (msec) : 10=17.04%, 20=80.71%, 50=0.28%, 100=1.97% 00:32:04.862 cpu : usr=93.75%, sys=5.64%, ctx=6, majf=0, minf=114 00:32:04.862 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.862 issued rwts: total=1068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.862 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:04.862 filename0: (groupid=0, jobs=1): err= 0: pid=3245825: Tue Jul 16 00:58:22 2024 00:32:04.862 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(129MiB/5004msec) 00:32:04.862 slat (usec): min=10, max=210, avg=28.21, stdev= 6.99 00:32:04.862 clat (usec): min=4965, max=58629, avg=14495.97, stdev=8775.87 00:32:04.862 lat (usec): min=4987, max=58660, avg=14524.19, stdev=8776.26 00:32:04.862 clat percentiles (usec): 00:32:04.862 | 
1.00th=[ 5800], 5.00th=[ 7570], 10.00th=[ 8586], 20.00th=[10159], 00:32:04.862 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13304], 60.00th=[13829], 00:32:04.862 | 70.00th=[14484], 80.00th=[15270], 90.00th=[16909], 95.00th=[19268], 00:32:04.862 | 99.00th=[54789], 99.50th=[55313], 99.90th=[57934], 99.95th=[58459], 00:32:04.862 | 99.99th=[58459] 00:32:04.862 bw ( KiB/s): min=22272, max=28672, per=32.99%, avg=26398.70, stdev=1904.48, samples=10 00:32:04.862 iops : min= 174, max= 224, avg=206.20, stdev=14.89, samples=10 00:32:04.862 lat (msec) : 10=18.68%, 20=76.77%, 50=0.48%, 100=4.07% 00:32:04.862 cpu : usr=94.08%, sys=5.32%, ctx=7, majf=0, minf=142 00:32:04.862 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:04.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:04.862 issued rwts: total=1033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:04.862 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:04.862 00:32:04.862 Run status group 0 (all jobs): 00:32:04.862 READ: bw=78.1MiB/s (81.9MB/s), 25.7MiB/s-26.7MiB/s (26.9MB/s-28.0MB/s), io=391MiB (410MB), run=5004-5007msec 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:05.121 00:58:22 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 bdev_null0 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 [2024-07-16 00:58:22.903242] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 bdev_null1 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
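The create_subsystems 0 1 2 call being expanded here reduces to the per-subsystem RPC sequence sketched below. rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, and the method names and arguments are copied verbatim from the trace; the explicit loop, the $RPC shorthand, and the assumption that the nvmf tcp transport was already created earlier in the test are the only additions.

# Sketch of the NULL_DIF=2 target setup, as direct rpc.py calls.
# Assumes the tcp transport already exists (nvmf_create_transport -t tcp, done
# earlier in the test); arguments below match the traced rpc_cmd calls.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py

for i in 0 1 2; do
  # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 2
  $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
  $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
       --serial-number "53313233-$i" --allow-any-host
  $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
  $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
       -t tcp -a 10.0.0.2 -s 4420
done

# Teardown, mirroring the destroy_subsystems calls that follow each run:
for i in 0 1 2; do
  $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  $RPC bdev_null_delete "bdev_null$i"
done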
00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.121 bdev_null2 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.121 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:05.380 { 00:32:05.380 "params": { 00:32:05.380 "name": "Nvme$subsystem", 00:32:05.380 "trtype": "$TEST_TRANSPORT", 00:32:05.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:05.380 "adrfam": "ipv4", 00:32:05.380 "trsvcid": "$NVMF_PORT", 00:32:05.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:05.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:05.380 "hdgst": ${hdgst:-false}, 00:32:05.380 "ddgst": ${ddgst:-false} 00:32:05.380 }, 00:32:05.380 "method": "bdev_nvme_attach_controller" 00:32:05.380 } 00:32:05.380 EOF 00:32:05.380 )") 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:05.380 { 00:32:05.380 "params": { 00:32:05.380 "name": "Nvme$subsystem", 00:32:05.380 "trtype": "$TEST_TRANSPORT", 00:32:05.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:05.380 "adrfam": "ipv4", 00:32:05.380 "trsvcid": "$NVMF_PORT", 00:32:05.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:05.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:05.380 "hdgst": ${hdgst:-false}, 00:32:05.380 "ddgst": ${ddgst:-false} 00:32:05.380 }, 00:32:05.380 "method": "bdev_nvme_attach_controller" 00:32:05.380 } 00:32:05.380 EOF 00:32:05.380 )") 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:05.380 { 00:32:05.380 "params": { 00:32:05.380 "name": "Nvme$subsystem", 00:32:05.380 "trtype": "$TEST_TRANSPORT", 00:32:05.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:05.380 "adrfam": "ipv4", 00:32:05.380 "trsvcid": "$NVMF_PORT", 00:32:05.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:05.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:05.380 "hdgst": ${hdgst:-false}, 00:32:05.380 "ddgst": ${ddgst:-false} 00:32:05.380 }, 00:32:05.380 "method": "bdev_nvme_attach_controller" 00:32:05.380 } 00:32:05.380 EOF 00:32:05.380 )") 00:32:05.380 00:58:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:05.380 00:58:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:05.380 00:58:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:05.380 00:58:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:05.380 "params": { 00:32:05.380 "name": "Nvme0", 00:32:05.380 "trtype": "tcp", 00:32:05.380 "traddr": "10.0.0.2", 00:32:05.380 "adrfam": "ipv4", 00:32:05.380 "trsvcid": "4420", 00:32:05.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:05.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:05.380 "hdgst": false, 00:32:05.380 "ddgst": false 00:32:05.380 }, 00:32:05.380 "method": "bdev_nvme_attach_controller" 00:32:05.380 },{ 00:32:05.380 "params": { 00:32:05.380 "name": "Nvme1", 00:32:05.380 "trtype": "tcp", 00:32:05.380 "traddr": "10.0.0.2", 00:32:05.380 "adrfam": "ipv4", 00:32:05.380 "trsvcid": "4420", 00:32:05.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:05.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:05.381 "hdgst": false, 00:32:05.381 "ddgst": false 00:32:05.381 }, 00:32:05.381 "method": "bdev_nvme_attach_controller" 00:32:05.381 },{ 00:32:05.381 "params": { 00:32:05.381 "name": "Nvme2", 00:32:05.381 "trtype": "tcp", 00:32:05.381 "traddr": "10.0.0.2", 00:32:05.381 "adrfam": "ipv4", 00:32:05.381 "trsvcid": "4420", 00:32:05.381 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:05.381 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:05.381 "hdgst": false, 00:32:05.381 "ddgst": false 00:32:05.381 }, 00:32:05.381 "method": "bdev_nvme_attach_controller" 00:32:05.381 }' 00:32:05.381 00:58:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:05.381 00:58:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:05.381 00:58:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:05.381 00:58:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:05.381 00:58:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:05.381 00:58:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:05.381 00:58:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:32:05.381 00:58:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:05.381 00:58:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:05.381 00:58:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:05.640 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:05.640 ... 00:32:05.640 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:05.640 ... 00:32:05.640 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:05.640 ... 00:32:05.640 fio-3.35 00:32:05.640 Starting 24 threads 00:32:05.640 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.843 00:32:17.843 filename0: (groupid=0, jobs=1): err= 0: pid=3247148: Tue Jul 16 00:58:34 2024 00:32:17.843 read: IOPS=428, BW=1716KiB/s (1757kB/s)(16.8MiB/10034msec) 00:32:17.843 slat (nsec): min=9749, max=53513, avg=14849.00, stdev=5560.07 00:32:17.843 clat (usec): min=4635, max=38904, avg=37173.96, stdev=3871.48 00:32:17.843 lat (usec): min=4649, max=38920, avg=37188.81, stdev=3870.62 00:32:17.843 clat percentiles (usec): 00:32:17.843 | 1.00th=[ 7570], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:32:17.843 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.843 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:17.843 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:32:17.843 | 99.99th=[39060] 00:32:17.843 bw ( KiB/s): min= 1660, max= 2176, per=4.22%, avg=1714.35, stdev=120.31, samples=20 00:32:17.843 iops : min= 415, max= 544, avg=428.55, stdev=30.09, samples=20 00:32:17.843 lat (msec) : 10=1.49%, 50=98.51% 00:32:17.843 cpu : usr=98.61%, sys=0.98%, ctx=9, majf=0, minf=9 00:32:17.843 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:17.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.843 filename0: (groupid=0, jobs=1): err= 0: pid=3247149: Tue Jul 16 00:58:34 2024 00:32:17.843 read: IOPS=423, BW=1694KiB/s (1735kB/s)(16.6MiB/10012msec) 00:32:17.843 slat (nsec): min=9885, max=70300, avg=29758.44, stdev=11362.00 00:32:17.843 clat (usec): min=22779, max=38828, avg=37551.39, stdev=983.36 00:32:17.843 lat (usec): min=22815, max=38855, avg=37581.15, stdev=982.54 00:32:17.843 clat percentiles (usec): 00:32:17.843 | 1.00th=[35914], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:32:17.843 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.843 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:32:17.843 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:32:17.843 | 99.99th=[39060] 00:32:17.843 bw ( KiB/s): min= 1660, max= 1792, per=4.16%, avg=1690.47, stdev=53.36, samples=19 00:32:17.843 iops : min= 415, max= 448, avg=422.58, stdev=13.36, samples=19 00:32:17.843 lat (msec) : 50=100.00% 00:32:17.843 cpu : 
usr=98.64%, sys=0.95%, ctx=14, majf=0, minf=9 00:32:17.843 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:17.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.843 filename0: (groupid=0, jobs=1): err= 0: pid=3247150: Tue Jul 16 00:58:34 2024 00:32:17.843 read: IOPS=422, BW=1688KiB/s (1729kB/s)(16.5MiB/10009msec) 00:32:17.843 slat (nsec): min=6393, max=52795, avg=26063.82, stdev=8305.81 00:32:17.843 clat (usec): min=18641, max=68272, avg=37666.52, stdev=2230.57 00:32:17.843 lat (usec): min=18658, max=68290, avg=37692.58, stdev=2229.97 00:32:17.843 clat percentiles (usec): 00:32:17.843 | 1.00th=[36963], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:32:17.843 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.843 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:32:17.843 | 99.00th=[38536], 99.50th=[38536], 99.90th=[68682], 99.95th=[68682], 00:32:17.843 | 99.99th=[68682] 00:32:17.843 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1683.58, stdev=64.41, samples=19 00:32:17.843 iops : min= 384, max= 448, avg=420.89, stdev=16.10, samples=19 00:32:17.843 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:32:17.843 cpu : usr=98.61%, sys=0.98%, ctx=14, majf=0, minf=9 00:32:17.843 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:17.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.843 filename0: (groupid=0, jobs=1): err= 0: pid=3247151: Tue Jul 16 00:58:34 2024 00:32:17.843 read: IOPS=430, BW=1723KiB/s (1764kB/s)(16.9MiB/10034msec) 00:32:17.843 slat (usec): min=5, max=111, avg=17.63, stdev= 5.59 00:32:17.843 clat (usec): min=4789, max=63087, avg=36937.17, stdev=4669.71 00:32:17.843 lat (usec): min=4815, max=63101, avg=36954.80, stdev=4669.76 00:32:17.843 clat percentiles (usec): 00:32:17.843 | 1.00th=[ 9241], 5.00th=[33424], 10.00th=[37487], 20.00th=[37487], 00:32:17.843 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.843 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:17.843 | 99.00th=[38536], 99.50th=[57410], 99.90th=[63177], 99.95th=[63177], 00:32:17.843 | 99.99th=[63177] 00:32:17.843 bw ( KiB/s): min= 1660, max= 2048, per=4.25%, avg=1725.20, stdev=102.01, samples=20 00:32:17.843 iops : min= 415, max= 512, avg=431.30, stdev=25.50, samples=20 00:32:17.843 lat (msec) : 10=1.11%, 20=0.65%, 50=97.73%, 100=0.51% 00:32:17.843 cpu : usr=98.36%, sys=1.22%, ctx=11, majf=0, minf=9 00:32:17.843 IO depths : 1=5.8%, 2=11.8%, 4=24.2%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:32:17.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 issued rwts: total=4322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.843 filename0: (groupid=0, jobs=1): err= 0: pid=3247152: Tue Jul 16 00:58:34 2024 00:32:17.843 read: IOPS=427, BW=1708KiB/s 
(1749kB/s)(16.7MiB/10004msec) 00:32:17.843 slat (nsec): min=9787, max=62171, avg=25988.94, stdev=6470.42 00:32:17.843 clat (usec): min=6999, max=58666, avg=37210.34, stdev=3289.67 00:32:17.843 lat (usec): min=7009, max=58687, avg=37236.33, stdev=3290.05 00:32:17.843 clat percentiles (usec): 00:32:17.843 | 1.00th=[ 8979], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:32:17.843 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.843 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:32:17.843 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:32:17.843 | 99.99th=[58459] 00:32:17.843 bw ( KiB/s): min= 1660, max= 1920, per=4.21%, avg=1710.32, stdev=77.01, samples=19 00:32:17.843 iops : min= 415, max= 480, avg=427.58, stdev=19.25, samples=19 00:32:17.843 lat (msec) : 10=1.12%, 20=0.05%, 50=98.78%, 100=0.05% 00:32:17.843 cpu : usr=98.09%, sys=1.40%, ctx=12, majf=0, minf=9 00:32:17.843 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:17.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.843 filename0: (groupid=0, jobs=1): err= 0: pid=3247153: Tue Jul 16 00:58:34 2024 00:32:17.843 read: IOPS=422, BW=1689KiB/s (1730kB/s)(16.5MiB/10002msec) 00:32:17.843 slat (nsec): min=9854, max=99780, avg=43135.09, stdev=14968.40 00:32:17.843 clat (usec): min=17927, max=76967, avg=37457.96, stdev=2256.08 00:32:17.843 lat (usec): min=17949, max=76992, avg=37501.09, stdev=2255.84 00:32:17.843 clat percentiles (usec): 00:32:17.843 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:32:17.843 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.843 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:32:17.843 | 99.00th=[38536], 99.50th=[38536], 99.90th=[66323], 99.95th=[66323], 00:32:17.843 | 99.99th=[77071] 00:32:17.843 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1682.95, stdev=64.19, samples=19 00:32:17.843 iops : min= 384, max= 448, avg=420.74, stdev=16.05, samples=19 00:32:17.843 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:32:17.843 cpu : usr=98.23%, sys=1.26%, ctx=13, majf=0, minf=9 00:32:17.843 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:17.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.843 filename0: (groupid=0, jobs=1): err= 0: pid=3247154: Tue Jul 16 00:58:34 2024 00:32:17.843 read: IOPS=421, BW=1688KiB/s (1728kB/s)(16.5MiB/10010msec) 00:32:17.843 slat (usec): min=9, max=116, avg=44.59, stdev=16.09 00:32:17.843 clat (usec): min=35196, max=55363, avg=37472.23, stdev=1157.87 00:32:17.843 lat (usec): min=35229, max=55389, avg=37516.82, stdev=1157.89 00:32:17.843 clat percentiles (usec): 00:32:17.843 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[36963], 00:32:17.843 | 30.00th=[36963], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.843 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38011], 00:32:17.843 | 99.00th=[38536], 
99.50th=[38536], 99.90th=[55313], 99.95th=[55313], 00:32:17.843 | 99.99th=[55313] 00:32:17.843 bw ( KiB/s): min= 1539, max= 1792, per=4.15%, avg=1684.11, stdev=63.46, samples=19 00:32:17.843 iops : min= 384, max= 448, avg=420.95, stdev=15.97, samples=19 00:32:17.843 lat (msec) : 50=99.62%, 100=0.38% 00:32:17.843 cpu : usr=98.20%, sys=1.29%, ctx=13, majf=0, minf=9 00:32:17.843 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:17.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.843 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.843 filename0: (groupid=0, jobs=1): err= 0: pid=3247155: Tue Jul 16 00:58:34 2024 00:32:17.843 read: IOPS=422, BW=1688KiB/s (1729kB/s)(16.5MiB/10007msec) 00:32:17.843 slat (nsec): min=4505, max=64657, avg=25421.13, stdev=8641.32 00:32:17.843 clat (usec): min=31239, max=51252, avg=37697.61, stdev=936.20 00:32:17.843 lat (usec): min=31251, max=51265, avg=37723.03, stdev=934.75 00:32:17.843 clat percentiles (usec): 00:32:17.843 | 1.00th=[36963], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:17.843 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.843 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:17.843 | 99.00th=[38536], 99.50th=[38536], 99.90th=[51119], 99.95th=[51119], 00:32:17.843 | 99.99th=[51119] 00:32:17.843 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1683.95, stdev=64.29, samples=19 00:32:17.843 iops : min= 384, max= 448, avg=420.95, stdev=16.08, samples=19 00:32:17.843 lat (msec) : 50=99.62%, 100=0.38% 00:32:17.843 cpu : usr=98.00%, sys=1.57%, ctx=21, majf=0, minf=9 00:32:17.844 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:17.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.844 filename1: (groupid=0, jobs=1): err= 0: pid=3247156: Tue Jul 16 00:58:34 2024 00:32:17.844 read: IOPS=426, BW=1704KiB/s (1745kB/s)(16.7MiB/10027msec) 00:32:17.844 slat (nsec): min=9576, max=69263, avg=22697.36, stdev=8571.21 00:32:17.844 clat (usec): min=6719, max=38989, avg=37373.31, stdev=2633.47 00:32:17.844 lat (usec): min=6740, max=39005, avg=37396.01, stdev=2633.76 00:32:17.844 clat percentiles (usec): 00:32:17.844 | 1.00th=[23200], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:32:17.844 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.844 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:17.844 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:32:17.844 | 99.99th=[39060] 00:32:17.844 bw ( KiB/s): min= 1660, max= 1920, per=4.19%, avg=1701.95, stdev=73.02, samples=20 00:32:17.844 iops : min= 415, max= 480, avg=425.45, stdev=18.27, samples=20 00:32:17.844 lat (msec) : 10=0.37%, 20=0.59%, 50=99.04% 00:32:17.844 cpu : usr=98.59%, sys=1.02%, ctx=10, majf=0, minf=9 00:32:17.844 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:17.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 issued rwts: total=4272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.844 filename1: (groupid=0, jobs=1): err= 0: pid=3247157: Tue Jul 16 00:58:34 2024 00:32:17.844 read: IOPS=421, BW=1688KiB/s (1728kB/s)(16.5MiB/10012msec) 00:32:17.844 slat (nsec): min=6250, max=54500, avg=26627.22, stdev=8124.76 00:32:17.844 clat (usec): min=18639, max=71146, avg=37676.71, stdev=2381.50 00:32:17.844 lat (usec): min=18658, max=71165, avg=37703.34, stdev=2380.75 00:32:17.844 clat percentiles (usec): 00:32:17.844 | 1.00th=[36963], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:32:17.844 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.844 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:32:17.844 | 99.00th=[38536], 99.50th=[38536], 99.90th=[70779], 99.95th=[70779], 00:32:17.844 | 99.99th=[70779] 00:32:17.844 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1683.58, stdev=64.41, samples=19 00:32:17.844 iops : min= 384, max= 448, avg=420.89, stdev=16.10, samples=19 00:32:17.844 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:32:17.844 cpu : usr=98.73%, sys=0.87%, ctx=12, majf=0, minf=9 00:32:17.844 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:17.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.844 filename1: (groupid=0, jobs=1): err= 0: pid=3247158: Tue Jul 16 00:58:34 2024 00:32:17.844 read: IOPS=422, BW=1689KiB/s (1729kB/s)(16.5MiB/10005msec) 00:32:17.844 slat (nsec): min=4642, max=78900, avg=37968.35, stdev=10710.19 00:32:17.844 clat (usec): min=18050, max=68563, avg=37550.31, stdev=2272.90 00:32:17.844 lat (usec): min=18089, max=68576, avg=37588.28, stdev=2271.83 00:32:17.844 clat percentiles (usec): 00:32:17.844 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:32:17.844 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.844 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:32:17.844 | 99.00th=[38536], 99.50th=[38536], 99.90th=[68682], 99.95th=[68682], 00:32:17.844 | 99.99th=[68682] 00:32:17.844 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1683.37, stdev=64.05, samples=19 00:32:17.844 iops : min= 384, max= 448, avg=420.84, stdev=16.01, samples=19 00:32:17.844 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:32:17.844 cpu : usr=98.57%, sys=1.04%, ctx=11, majf=0, minf=9 00:32:17.844 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:17.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.844 filename1: (groupid=0, jobs=1): err= 0: pid=3247159: Tue Jul 16 00:58:34 2024 00:32:17.844 read: IOPS=423, BW=1694KiB/s (1735kB/s)(16.6MiB/10012msec) 00:32:17.844 slat (nsec): min=4894, max=70165, avg=36532.05, stdev=10442.53 00:32:17.844 clat (usec): min=22833, max=48019, avg=37477.10, stdev=1069.65 00:32:17.844 lat (usec): min=22844, max=48044, avg=37513.63, stdev=1070.21 00:32:17.844 clat 
percentiles (usec): 00:32:17.844 | 1.00th=[35390], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:32:17.844 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.844 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:32:17.844 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39060], 99.95th=[47449], 00:32:17.844 | 99.99th=[47973] 00:32:17.844 bw ( KiB/s): min= 1660, max= 1792, per=4.16%, avg=1690.32, stdev=53.44, samples=19 00:32:17.844 iops : min= 415, max= 448, avg=422.58, stdev=13.36, samples=19 00:32:17.844 lat (msec) : 50=100.00% 00:32:17.844 cpu : usr=98.11%, sys=1.47%, ctx=18, majf=0, minf=9 00:32:17.844 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:17.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.844 filename1: (groupid=0, jobs=1): err= 0: pid=3247160: Tue Jul 16 00:58:34 2024 00:32:17.844 read: IOPS=422, BW=1689KiB/s (1730kB/s)(16.5MiB/10003msec) 00:32:17.844 slat (nsec): min=5153, max=79108, avg=37760.45, stdev=10530.12 00:32:17.844 clat (usec): min=17956, max=67550, avg=37566.25, stdev=2224.67 00:32:17.844 lat (usec): min=18000, max=67563, avg=37604.01, stdev=2223.41 00:32:17.844 clat percentiles (usec): 00:32:17.844 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:32:17.844 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.844 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:32:17.844 | 99.00th=[38536], 99.50th=[38536], 99.90th=[67634], 99.95th=[67634], 00:32:17.844 | 99.99th=[67634] 00:32:17.844 bw ( KiB/s): min= 1539, max= 1792, per=4.15%, avg=1683.53, stdev=63.66, samples=19 00:32:17.844 iops : min= 384, max= 448, avg=420.84, stdev=16.01, samples=19 00:32:17.844 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:32:17.844 cpu : usr=98.70%, sys=0.91%, ctx=14, majf=0, minf=9 00:32:17.844 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:17.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.844 filename1: (groupid=0, jobs=1): err= 0: pid=3247161: Tue Jul 16 00:58:34 2024 00:32:17.844 read: IOPS=423, BW=1693KiB/s (1734kB/s)(16.5MiB/10002msec) 00:32:17.844 slat (nsec): min=5266, max=71658, avg=30390.30, stdev=13522.11 00:32:17.844 clat (usec): min=18075, max=77754, avg=37534.46, stdev=3669.97 00:32:17.844 lat (usec): min=18091, max=77773, avg=37564.85, stdev=3669.42 00:32:17.844 clat percentiles (usec): 00:32:17.844 | 1.00th=[26346], 5.00th=[35390], 10.00th=[36963], 20.00th=[37487], 00:32:17.844 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.844 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:17.844 | 99.00th=[48497], 99.50th=[61080], 99.90th=[78119], 99.95th=[78119], 00:32:17.844 | 99.99th=[78119] 00:32:17.844 bw ( KiB/s): min= 1532, max= 1808, per=4.15%, avg=1686.95, stdev=65.25, samples=19 00:32:17.844 iops : min= 383, max= 452, avg=421.74, stdev=16.31, samples=19 00:32:17.844 lat (msec) : 20=0.38%, 50=99.01%, 
100=0.61% 00:32:17.844 cpu : usr=98.68%, sys=0.94%, ctx=11, majf=0, minf=9 00:32:17.844 IO depths : 1=5.0%, 2=10.1%, 4=20.7%, 8=56.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:32:17.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 complete : 0=0.0%, 4=93.1%, 8=1.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 issued rwts: total=4234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.844 filename1: (groupid=0, jobs=1): err= 0: pid=3247162: Tue Jul 16 00:58:34 2024 00:32:17.844 read: IOPS=422, BW=1688KiB/s (1729kB/s)(16.5MiB/10007msec) 00:32:17.844 slat (nsec): min=5556, max=53076, avg=24437.88, stdev=8441.32 00:32:17.844 clat (usec): min=29654, max=55217, avg=37709.65, stdev=1147.47 00:32:17.844 lat (usec): min=29690, max=55231, avg=37734.09, stdev=1146.64 00:32:17.844 clat percentiles (usec): 00:32:17.844 | 1.00th=[33817], 5.00th=[37487], 10.00th=[37487], 20.00th=[37487], 00:32:17.844 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.844 | 70.00th=[38011], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:17.844 | 99.00th=[42206], 99.50th=[42730], 99.90th=[51119], 99.95th=[51119], 00:32:17.844 | 99.99th=[55313] 00:32:17.844 bw ( KiB/s): min= 1539, max= 1792, per=4.15%, avg=1684.11, stdev=63.91, samples=19 00:32:17.844 iops : min= 384, max= 448, avg=420.95, stdev=16.08, samples=19 00:32:17.844 lat (msec) : 50=99.62%, 100=0.38% 00:32:17.844 cpu : usr=98.42%, sys=1.18%, ctx=16, majf=0, minf=9 00:32:17.844 IO depths : 1=5.9%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:17.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.844 filename1: (groupid=0, jobs=1): err= 0: pid=3247163: Tue Jul 16 00:58:34 2024 00:32:17.844 read: IOPS=423, BW=1694KiB/s (1735kB/s)(16.6MiB/10012msec) 00:32:17.844 slat (nsec): min=9760, max=72446, avg=35495.65, stdev=11590.73 00:32:17.844 clat (usec): min=22722, max=47603, avg=37500.23, stdev=1031.49 00:32:17.844 lat (usec): min=22740, max=47619, avg=37535.72, stdev=1031.16 00:32:17.844 clat percentiles (usec): 00:32:17.844 | 1.00th=[35914], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:32:17.844 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.844 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:32:17.844 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[39060], 00:32:17.844 | 99.99th=[47449] 00:32:17.844 bw ( KiB/s): min= 1660, max= 1792, per=4.16%, avg=1690.47, stdev=53.36, samples=19 00:32:17.844 iops : min= 415, max= 448, avg=422.58, stdev=13.36, samples=19 00:32:17.844 lat (msec) : 50=100.00% 00:32:17.844 cpu : usr=98.55%, sys=1.05%, ctx=11, majf=0, minf=9 00:32:17.844 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:17.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.844 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.845 filename2: (groupid=0, jobs=1): err= 0: pid=3247164: Tue Jul 16 00:58:34 2024 00:32:17.845 read: IOPS=422, BW=1689KiB/s 
(1730kB/s)(16.5MiB/10003msec) 00:32:17.845 slat (nsec): min=4819, max=71420, avg=37720.20, stdev=10870.01 00:32:17.845 clat (usec): min=18058, max=78040, avg=37541.07, stdev=2585.95 00:32:17.845 lat (usec): min=18087, max=78054, avg=37578.79, stdev=2585.04 00:32:17.845 clat percentiles (usec): 00:32:17.845 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:32:17.845 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.845 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:32:17.845 | 99.00th=[38536], 99.50th=[38536], 99.90th=[78119], 99.95th=[78119], 00:32:17.845 | 99.99th=[78119] 00:32:17.845 bw ( KiB/s): min= 1539, max= 1792, per=4.15%, avg=1683.53, stdev=63.66, samples=19 00:32:17.845 iops : min= 384, max= 448, avg=420.84, stdev=16.01, samples=19 00:32:17.845 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:32:17.845 cpu : usr=98.71%, sys=0.90%, ctx=12, majf=0, minf=10 00:32:17.845 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:17.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.845 filename2: (groupid=0, jobs=1): err= 0: pid=3247165: Tue Jul 16 00:58:34 2024 00:32:17.845 read: IOPS=423, BW=1694KiB/s (1735kB/s)(16.6MiB/10012msec) 00:32:17.845 slat (nsec): min=8943, max=75429, avg=26737.04, stdev=10295.98 00:32:17.845 clat (usec): min=22750, max=38824, avg=37572.79, stdev=968.79 00:32:17.845 lat (usec): min=22767, max=38842, avg=37599.53, stdev=967.94 00:32:17.845 clat percentiles (usec): 00:32:17.845 | 1.00th=[35914], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:32:17.845 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.845 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:17.845 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[39060], 00:32:17.845 | 99.99th=[39060] 00:32:17.845 bw ( KiB/s): min= 1660, max= 1792, per=4.16%, avg=1690.47, stdev=53.36, samples=19 00:32:17.845 iops : min= 415, max= 448, avg=422.58, stdev=13.36, samples=19 00:32:17.845 lat (msec) : 50=100.00% 00:32:17.845 cpu : usr=98.70%, sys=0.89%, ctx=10, majf=0, minf=9 00:32:17.845 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:17.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.845 filename2: (groupid=0, jobs=1): err= 0: pid=3247166: Tue Jul 16 00:58:34 2024 00:32:17.845 read: IOPS=421, BW=1688KiB/s (1728kB/s)(16.5MiB/10010msec) 00:32:17.845 slat (nsec): min=4506, max=50491, avg=25879.76, stdev=7722.80 00:32:17.845 clat (usec): min=31054, max=51543, avg=37689.82, stdev=1027.83 00:32:17.845 lat (usec): min=31065, max=51558, avg=37715.70, stdev=1026.84 00:32:17.845 clat percentiles (usec): 00:32:17.845 | 1.00th=[36963], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:32:17.845 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.845 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38536], 00:32:17.845 | 99.00th=[38536], 99.50th=[42730], 
99.90th=[51643], 99.95th=[51643], 00:32:17.845 | 99.99th=[51643] 00:32:17.845 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1683.95, stdev=64.29, samples=19 00:32:17.845 iops : min= 384, max= 448, avg=420.95, stdev=16.08, samples=19 00:32:17.845 lat (msec) : 50=99.62%, 100=0.38% 00:32:17.845 cpu : usr=98.33%, sys=1.27%, ctx=18, majf=0, minf=9 00:32:17.845 IO depths : 1=6.0%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:32:17.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.845 filename2: (groupid=0, jobs=1): err= 0: pid=3247167: Tue Jul 16 00:58:34 2024 00:32:17.845 read: IOPS=428, BW=1715KiB/s (1757kB/s)(16.8MiB/10003msec) 00:32:17.845 slat (nsec): min=8054, max=53899, avg=16143.67, stdev=8148.01 00:32:17.845 clat (msec): min=8, max=100, avg=37.24, stdev= 5.57 00:32:17.845 lat (msec): min=8, max=100, avg=37.26, stdev= 5.57 00:32:17.845 clat percentiles (msec): 00:32:17.845 | 1.00th=[ 24], 5.00th=[ 28], 10.00th=[ 32], 20.00th=[ 36], 00:32:17.845 | 30.00th=[ 38], 40.00th=[ 38], 50.00th=[ 38], 60.00th=[ 39], 00:32:17.845 | 70.00th=[ 39], 80.00th=[ 39], 90.00th=[ 42], 95.00th=[ 47], 00:32:17.845 | 99.00th=[ 50], 99.50th=[ 61], 99.90th=[ 79], 99.95th=[ 79], 00:32:17.845 | 99.99th=[ 102] 00:32:17.845 bw ( KiB/s): min= 1458, max= 1756, per=4.20%, avg=1703.68, stdev=63.96, samples=19 00:32:17.845 iops : min= 364, max= 439, avg=425.89, stdev=16.10, samples=19 00:32:17.845 lat (msec) : 10=0.14%, 20=0.61%, 50=98.60%, 100=0.61%, 250=0.05% 00:32:17.845 cpu : usr=98.75%, sys=0.85%, ctx=9, majf=0, minf=9 00:32:17.845 IO depths : 1=0.1%, 2=0.2%, 4=2.6%, 8=80.7%, 16=16.5%, 32=0.0%, >=64=0.0% 00:32:17.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 complete : 0=0.0%, 4=89.0%, 8=9.2%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 issued rwts: total=4290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.845 filename2: (groupid=0, jobs=1): err= 0: pid=3247168: Tue Jul 16 00:58:34 2024 00:32:17.845 read: IOPS=421, BW=1688KiB/s (1728kB/s)(16.5MiB/10011msec) 00:32:17.845 slat (nsec): min=6426, max=55959, avg=26556.25, stdev=7944.63 00:32:17.845 clat (usec): min=18682, max=68474, avg=37670.48, stdev=2232.90 00:32:17.845 lat (usec): min=18716, max=68491, avg=37697.04, stdev=2232.25 00:32:17.845 clat percentiles (usec): 00:32:17.845 | 1.00th=[36963], 5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:32:17.845 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.845 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:32:17.845 | 99.00th=[38536], 99.50th=[38536], 99.90th=[68682], 99.95th=[68682], 00:32:17.845 | 99.99th=[68682] 00:32:17.845 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1683.58, stdev=64.41, samples=19 00:32:17.845 iops : min= 384, max= 448, avg=420.89, stdev=16.10, samples=19 00:32:17.845 lat (msec) : 20=0.33%, 50=99.29%, 100=0.38% 00:32:17.845 cpu : usr=98.47%, sys=1.13%, ctx=24, majf=0, minf=9 00:32:17.845 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:17.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:17.845 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.845 filename2: (groupid=0, jobs=1): err= 0: pid=3247169: Tue Jul 16 00:58:34 2024 00:32:17.845 read: IOPS=422, BW=1689KiB/s (1729kB/s)(16.5MiB/10004msec) 00:32:17.845 slat (nsec): min=5044, max=76177, avg=37938.07, stdev=10650.63 00:32:17.845 clat (usec): min=17963, max=68697, avg=37566.23, stdev=2304.49 00:32:17.845 lat (usec): min=17988, max=68710, avg=37604.17, stdev=2303.17 00:32:17.845 clat percentiles (usec): 00:32:17.845 | 1.00th=[36439], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:32:17.845 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.845 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:32:17.845 | 99.00th=[38536], 99.50th=[38536], 99.90th=[68682], 99.95th=[68682], 00:32:17.845 | 99.99th=[68682] 00:32:17.845 bw ( KiB/s): min= 1536, max= 1792, per=4.15%, avg=1683.37, stdev=64.05, samples=19 00:32:17.845 iops : min= 384, max= 448, avg=420.84, stdev=16.01, samples=19 00:32:17.845 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:32:17.845 cpu : usr=98.68%, sys=0.92%, ctx=14, majf=0, minf=9 00:32:17.845 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:17.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.845 filename2: (groupid=0, jobs=1): err= 0: pid=3247170: Tue Jul 16 00:58:34 2024 00:32:17.845 read: IOPS=423, BW=1694KiB/s (1735kB/s)(16.6MiB/10012msec) 00:32:17.845 slat (nsec): min=7659, max=69499, avg=35404.73, stdev=11627.87 00:32:17.845 clat (usec): min=22564, max=48395, avg=37502.70, stdev=1032.61 00:32:17.845 lat (usec): min=22574, max=48411, avg=37538.10, stdev=1032.25 00:32:17.845 clat percentiles (usec): 00:32:17.845 | 1.00th=[35914], 5.00th=[36963], 10.00th=[36963], 20.00th=[37487], 00:32:17.845 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.845 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:32:17.845 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:32:17.845 | 99.99th=[48497] 00:32:17.845 bw ( KiB/s): min= 1660, max= 1792, per=4.16%, avg=1690.32, stdev=53.44, samples=19 00:32:17.845 iops : min= 415, max= 448, avg=422.58, stdev=13.36, samples=19 00:32:17.845 lat (msec) : 50=100.00% 00:32:17.845 cpu : usr=98.57%, sys=1.02%, ctx=6, majf=0, minf=9 00:32:17.845 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:17.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 issued rwts: total=4240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.845 filename2: (groupid=0, jobs=1): err= 0: pid=3247171: Tue Jul 16 00:58:34 2024 00:32:17.845 read: IOPS=422, BW=1688KiB/s (1729kB/s)(16.5MiB/10007msec) 00:32:17.845 slat (nsec): min=5959, max=53345, avg=26803.63, stdev=8158.77 00:32:17.845 clat (usec): min=29756, max=55288, avg=37660.39, stdev=958.29 00:32:17.845 lat (usec): min=29798, max=55301, avg=37687.19, stdev=957.59 00:32:17.845 clat percentiles (usec): 00:32:17.845 | 1.00th=[36963], 
5.00th=[36963], 10.00th=[37487], 20.00th=[37487], 00:32:17.845 | 30.00th=[37487], 40.00th=[37487], 50.00th=[37487], 60.00th=[37487], 00:32:17.845 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38011], 95.00th=[38011], 00:32:17.845 | 99.00th=[38536], 99.50th=[38536], 99.90th=[51119], 99.95th=[51119], 00:32:17.845 | 99.99th=[55313] 00:32:17.845 bw ( KiB/s): min= 1539, max= 1792, per=4.15%, avg=1684.11, stdev=63.91, samples=19 00:32:17.845 iops : min= 384, max= 448, avg=420.95, stdev=16.08, samples=19 00:32:17.845 lat (msec) : 50=99.62%, 100=0.38% 00:32:17.845 cpu : usr=98.67%, sys=0.91%, ctx=25, majf=0, minf=9 00:32:17.845 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:17.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.845 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:17.845 00:32:17.845 Run status group 0 (all jobs): 00:32:17.846 READ: bw=39.6MiB/s (41.6MB/s), 1688KiB/s-1723KiB/s (1728kB/s-1764kB/s), io=398MiB (417MB), run=10002-10034msec 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 
00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 bdev_null0 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 [2024-07-16 00:58:34.670477] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 bdev_null1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:17.846 { 00:32:17.846 "params": { 00:32:17.846 "name": "Nvme$subsystem", 00:32:17.846 "trtype": "$TEST_TRANSPORT", 00:32:17.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:17.846 "adrfam": "ipv4", 00:32:17.846 "trsvcid": "$NVMF_PORT", 00:32:17.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:17.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:17.846 "hdgst": ${hdgst:-false}, 00:32:17.846 "ddgst": ${ddgst:-false} 00:32:17.846 }, 00:32:17.846 "method": "bdev_nvme_attach_controller" 00:32:17.846 } 00:32:17.846 EOF 00:32:17.846 )") 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:17.846 { 00:32:17.846 "params": { 00:32:17.846 "name": "Nvme$subsystem", 00:32:17.846 "trtype": "$TEST_TRANSPORT", 00:32:17.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:17.846 "adrfam": "ipv4", 00:32:17.846 "trsvcid": "$NVMF_PORT", 00:32:17.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:17.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:17.846 "hdgst": ${hdgst:-false}, 00:32:17.846 "ddgst": ${ddgst:-false} 00:32:17.846 }, 00:32:17.846 "method": "bdev_nvme_attach_controller" 00:32:17.846 } 00:32:17.846 EOF 00:32:17.846 )") 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:32:17.846 00:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:17.847 "params": { 00:32:17.847 "name": "Nvme0", 00:32:17.847 "trtype": "tcp", 00:32:17.847 "traddr": "10.0.0.2", 00:32:17.847 "adrfam": "ipv4", 00:32:17.847 "trsvcid": "4420", 00:32:17.847 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:17.847 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:17.847 "hdgst": false, 00:32:17.847 "ddgst": false 00:32:17.847 }, 00:32:17.847 "method": "bdev_nvme_attach_controller" 00:32:17.847 },{ 00:32:17.847 "params": { 00:32:17.847 "name": "Nvme1", 00:32:17.847 "trtype": "tcp", 00:32:17.847 "traddr": "10.0.0.2", 00:32:17.847 "adrfam": "ipv4", 00:32:17.847 "trsvcid": "4420", 00:32:17.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:17.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:17.847 "hdgst": false, 00:32:17.847 "ddgst": false 00:32:17.847 }, 00:32:17.847 "method": "bdev_nvme_attach_controller" 00:32:17.847 }' 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:17.847 00:58:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:17.847 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:17.847 ... 00:32:17.847 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:17.847 ... 
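For reference, the wrapper traced above reduces to a single fio invocation with the SPDK bdev ioengine preloaded. A minimal standalone sketch, assuming the two generated file descriptors are saved to ordinary files named nvme.json and dif.fio (both file names are illustrative; the fio binary and plugin paths are copied from the trace):

  # fio with the SPDK bdev engine plugin preloaded, as in the LD_PRELOAD line above
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf nvme.json dif.fio
  # nvme.json would hold the two bdev_nvme_attach_controller entries printed above
  # (Nvme0 -> nqn.2016-06.io.spdk:cnode0 and Nvme1 -> nqn.2016-06.io.spdk:cnode1,
  # hdgst/ddgst false); dif.fio would hold the generated randread job from the
  # NULL_DIF=1 parameter block (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5).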
00:32:17.847 fio-3.35 00:32:17.847 Starting 4 threads 00:32:17.847 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.119 00:32:23.119 filename0: (groupid=0, jobs=1): err= 0: pid=3249138: Tue Jul 16 00:58:40 2024 00:32:23.119 read: IOPS=1737, BW=13.6MiB/s (14.2MB/s)(67.9MiB/5003msec) 00:32:23.119 slat (nsec): min=9660, max=84931, avg=28271.29, stdev=7102.68 00:32:23.119 clat (usec): min=880, max=7574, avg=4517.72, stdev=660.79 00:32:23.119 lat (usec): min=903, max=7613, avg=4545.99, stdev=660.65 00:32:23.119 clat percentiles (usec): 00:32:23.119 | 1.00th=[ 3032], 5.00th=[ 3556], 10.00th=[ 3851], 20.00th=[ 4178], 00:32:23.119 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:32:23.119 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5014], 95.00th=[ 6063], 00:32:23.119 | 99.00th=[ 6849], 99.50th=[ 7177], 99.90th=[ 7439], 99.95th=[ 7570], 00:32:23.119 | 99.99th=[ 7570] 00:32:23.119 bw ( KiB/s): min=13168, max=14976, per=25.02%, avg=13902.40, stdev=525.78, samples=10 00:32:23.119 iops : min= 1646, max= 1872, avg=1737.80, stdev=65.72, samples=10 00:32:23.119 lat (usec) : 1000=0.01% 00:32:23.119 lat (msec) : 2=0.28%, 4=12.86%, 10=86.85% 00:32:23.119 cpu : usr=95.20%, sys=4.18%, ctx=7, majf=0, minf=9 00:32:23.119 IO depths : 1=0.1%, 2=8.8%, 4=63.0%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.119 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.119 issued rwts: total=8692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.119 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:23.119 filename0: (groupid=0, jobs=1): err= 0: pid=3249139: Tue Jul 16 00:58:40 2024 00:32:23.119 read: IOPS=1734, BW=13.5MiB/s (14.2MB/s)(67.8MiB/5002msec) 00:32:23.119 slat (nsec): min=9688, max=79390, avg=28497.10, stdev=7153.08 00:32:23.119 clat (usec): min=922, max=7575, avg=4527.67, stdev=521.85 00:32:23.119 lat (usec): min=952, max=7596, avg=4556.17, stdev=521.66 00:32:23.119 clat percentiles (usec): 00:32:23.119 | 1.00th=[ 3326], 5.00th=[ 3884], 10.00th=[ 4047], 20.00th=[ 4228], 00:32:23.119 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:32:23.119 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5407], 00:32:23.119 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7373], 99.95th=[ 7439], 00:32:23.119 | 99.99th=[ 7570] 00:32:23.119 bw ( KiB/s): min=13440, max=14544, per=24.96%, avg=13868.20, stdev=325.39, samples=10 00:32:23.119 iops : min= 1680, max= 1818, avg=1733.50, stdev=40.71, samples=10 00:32:23.119 lat (usec) : 1000=0.01% 00:32:23.119 lat (msec) : 2=0.09%, 4=8.04%, 10=91.86% 00:32:23.119 cpu : usr=95.34%, sys=4.02%, ctx=6, majf=0, minf=9 00:32:23.119 IO depths : 1=0.2%, 2=7.5%, 4=63.8%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.119 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.119 issued rwts: total=8674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.119 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:23.119 filename1: (groupid=0, jobs=1): err= 0: pid=3249140: Tue Jul 16 00:58:40 2024 00:32:23.119 read: IOPS=1725, BW=13.5MiB/s (14.1MB/s)(67.5MiB/5003msec) 00:32:23.119 slat (nsec): min=9902, max=77940, avg=28316.63, stdev=6774.24 00:32:23.119 clat (usec): min=1729, max=8204, avg=4553.21, stdev=564.41 00:32:23.119 lat (usec): min=1753, max=8266, avg=4581.53, stdev=564.34 00:32:23.119 
clat percentiles (usec): 00:32:23.119 | 1.00th=[ 3326], 5.00th=[ 3851], 10.00th=[ 4080], 20.00th=[ 4228], 00:32:23.119 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:32:23.119 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5538], 00:32:23.119 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[ 7832], 99.95th=[ 8094], 00:32:23.119 | 99.99th=[ 8225] 00:32:23.119 bw ( KiB/s): min=13360, max=14176, per=24.84%, avg=13804.80, stdev=276.80, samples=10 00:32:23.119 iops : min= 1670, max= 1772, avg=1725.60, stdev=34.60, samples=10 00:32:23.119 lat (msec) : 2=0.03%, 4=7.56%, 10=92.40% 00:32:23.119 cpu : usr=95.82%, sys=3.54%, ctx=7, majf=0, minf=9 00:32:23.119 IO depths : 1=0.4%, 2=5.6%, 4=65.5%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.119 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.119 issued rwts: total=8634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.119 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:23.119 filename1: (groupid=0, jobs=1): err= 0: pid=3249141: Tue Jul 16 00:58:40 2024 00:32:23.119 read: IOPS=1749, BW=13.7MiB/s (14.3MB/s)(68.4MiB/5002msec) 00:32:23.119 slat (nsec): min=9201, max=44987, avg=16217.15, stdev=6102.87 00:32:23.119 clat (usec): min=1943, max=7842, avg=4526.12, stdev=605.90 00:32:23.119 lat (usec): min=1953, max=7853, avg=4542.34, stdev=605.70 00:32:23.119 clat percentiles (usec): 00:32:23.119 | 1.00th=[ 3130], 5.00th=[ 3589], 10.00th=[ 3949], 20.00th=[ 4228], 00:32:23.119 | 30.00th=[ 4359], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4555], 00:32:23.119 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5669], 00:32:23.119 | 99.00th=[ 6849], 99.50th=[ 7177], 99.90th=[ 7504], 99.95th=[ 7570], 00:32:23.119 | 99.99th=[ 7832] 00:32:23.119 bw ( KiB/s): min=13120, max=14432, per=24.94%, avg=13856.00, stdev=428.21, samples=9 00:32:23.119 iops : min= 1640, max= 1804, avg=1732.00, stdev=53.53, samples=9 00:32:23.119 lat (msec) : 2=0.03%, 4=11.36%, 10=88.60% 00:32:23.119 cpu : usr=96.08%, sys=3.48%, ctx=10, majf=0, minf=9 00:32:23.119 IO depths : 1=0.1%, 2=4.8%, 4=68.3%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:23.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.119 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:23.119 issued rwts: total=8749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:23.119 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:23.119 00:32:23.119 Run status group 0 (all jobs): 00:32:23.119 READ: bw=54.3MiB/s (56.9MB/s), 13.5MiB/s-13.7MiB/s (14.1MB/s-14.3MB/s), io=271MiB (285MB), run=5002-5003msec 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:23.378 
00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.378 00:32:23.378 real 0m24.391s 00:32:23.378 user 5m6.701s 00:32:23.378 sys 0m5.435s 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:23.378 00:58:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:23.378 ************************************ 00:32:23.378 END TEST fio_dif_rand_params 00:32:23.378 ************************************ 00:32:23.378 00:58:41 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:23.378 00:58:41 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:23.378 00:58:41 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:23.378 00:58:41 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:23.378 00:58:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:23.378 ************************************ 00:32:23.378 START TEST fio_dif_digest 00:32:23.378 ************************************ 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:23.378 00:58:41 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:23.378 bdev_null0 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:23.378 [2024-07-16 00:58:41.168196] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:23.378 00:58:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:23.378 { 00:32:23.378 "params": { 
00:32:23.378 "name": "Nvme$subsystem", 00:32:23.378 "trtype": "$TEST_TRANSPORT", 00:32:23.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:23.378 "adrfam": "ipv4", 00:32:23.378 "trsvcid": "$NVMF_PORT", 00:32:23.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:23.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:23.379 "hdgst": ${hdgst:-false}, 00:32:23.379 "ddgst": ${ddgst:-false} 00:32:23.379 }, 00:32:23.379 "method": "bdev_nvme_attach_controller" 00:32:23.379 } 00:32:23.379 EOF 00:32:23.379 )") 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:23.379 "params": { 00:32:23.379 "name": "Nvme0", 00:32:23.379 "trtype": "tcp", 00:32:23.379 "traddr": "10.0.0.2", 00:32:23.379 "adrfam": "ipv4", 00:32:23.379 "trsvcid": "4420", 00:32:23.379 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.379 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:23.379 "hdgst": true, 00:32:23.379 "ddgst": true 00:32:23.379 }, 00:32:23.379 "method": "bdev_nvme_attach_controller" 00:32:23.379 }' 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:23.379 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:23.664 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:23.664 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:23.664 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:23.664 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:23.664 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:23.664 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:23.664 00:58:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:23.926 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:23.926 ... 
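The digest run differs from the preceding one in its I/O shape and in enabling NVMe/TCP digests: the null bdev is created with --dif-type 3, the attach-controller JSON printed above sets hdgst and ddgst to true, and fio drives 3 jobs of 128 KiB random reads at iodepth 3 for roughly 10 seconds. A standalone sketch of the same subsystem setup, with every argument copied from the xtrace and only the rpc.py path assumed:

  # DIF type 3 null bdev with 16-byte metadata, exported over NVMe/TCP on 10.0.0.2:4420
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420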
00:32:23.926 fio-3.35 00:32:23.926 Starting 3 threads 00:32:23.926 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.112 00:32:36.112 filename0: (groupid=0, jobs=1): err= 0: pid=3250469: Tue Jul 16 00:58:52 2024 00:32:36.112 read: IOPS=185, BW=23.2MiB/s (24.4MB/s)(234MiB/10047msec) 00:32:36.112 slat (nsec): min=5748, max=44464, avg=16843.25, stdev=7497.12 00:32:36.112 clat (usec): min=10607, max=59366, avg=16091.24, stdev=2931.38 00:32:36.112 lat (usec): min=10630, max=59379, avg=16108.09, stdev=2931.42 00:32:36.112 clat percentiles (usec): 00:32:36.112 | 1.00th=[12518], 5.00th=[13960], 10.00th=[14484], 20.00th=[15008], 00:32:36.112 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15926], 60.00th=[16188], 00:32:36.112 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17433], 95.00th=[17695], 00:32:36.112 | 99.00th=[18482], 99.50th=[19530], 99.90th=[59507], 99.95th=[59507], 00:32:36.112 | 99.99th=[59507] 00:32:36.112 bw ( KiB/s): min=21760, max=25088, per=33.44%, avg=23884.80, stdev=757.14, samples=20 00:32:36.112 iops : min= 170, max= 196, avg=186.60, stdev= 5.92, samples=20 00:32:36.112 lat (msec) : 20=99.57%, 50=0.05%, 100=0.37% 00:32:36.112 cpu : usr=97.37%, sys=2.31%, ctx=33, majf=0, minf=116 00:32:36.112 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:36.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.112 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.112 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:36.112 filename0: (groupid=0, jobs=1): err= 0: pid=3250470: Tue Jul 16 00:58:52 2024 00:32:36.112 read: IOPS=174, BW=21.9MiB/s (22.9MB/s)(220MiB/10048msec) 00:32:36.112 slat (usec): min=9, max=210, avg=29.61, stdev= 9.23 00:32:36.112 clat (usec): min=9247, max=58688, avg=17088.16, stdev=2458.12 00:32:36.112 lat (usec): min=9257, max=58721, avg=17117.77, stdev=2458.19 00:32:36.112 clat percentiles (usec): 00:32:36.112 | 1.00th=[11731], 5.00th=[15139], 10.00th=[15664], 20.00th=[16057], 00:32:36.112 | 30.00th=[16450], 40.00th=[16712], 50.00th=[16909], 60.00th=[17171], 00:32:36.112 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:32:36.112 | 99.00th=[20055], 99.50th=[20841], 99.90th=[58459], 99.95th=[58459], 00:32:36.112 | 99.99th=[58459] 00:32:36.112 bw ( KiB/s): min=20480, max=23296, per=31.46%, avg=22466.20, stdev=603.30, samples=20 00:32:36.112 iops : min= 160, max= 182, avg=175.50, stdev= 4.72, samples=20 00:32:36.112 lat (msec) : 10=0.06%, 20=98.75%, 50=0.97%, 100=0.23% 00:32:36.112 cpu : usr=95.87%, sys=3.71%, ctx=24, majf=0, minf=176 00:32:36.112 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:36.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.113 issued rwts: total=1758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.113 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:36.113 filename0: (groupid=0, jobs=1): err= 0: pid=3250471: Tue Jul 16 00:58:52 2024 00:32:36.113 read: IOPS=197, BW=24.6MiB/s (25.8MB/s)(248MiB/10049msec) 00:32:36.113 slat (usec): min=9, max=228, avg=22.52, stdev= 8.31 00:32:36.113 clat (usec): min=9135, max=60588, avg=15169.58, stdev=2402.85 00:32:36.113 lat (usec): min=9152, max=60619, avg=15192.10, stdev=2402.60 00:32:36.113 clat percentiles (usec): 00:32:36.113 | 1.00th=[10552], 
5.00th=[13435], 10.00th=[13829], 20.00th=[14353], 00:32:36.113 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:32:36.113 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16319], 95.00th=[16712], 00:32:36.113 | 99.00th=[17695], 99.50th=[17957], 99.90th=[60556], 99.95th=[60556], 00:32:36.113 | 99.99th=[60556] 00:32:36.113 bw ( KiB/s): min=23552, max=26624, per=35.47%, avg=25331.20, stdev=666.92, samples=20 00:32:36.113 iops : min= 184, max= 208, avg=197.90, stdev= 5.21, samples=20 00:32:36.113 lat (msec) : 10=0.50%, 20=99.24%, 50=0.05%, 100=0.20% 00:32:36.113 cpu : usr=96.81%, sys=2.60%, ctx=18, majf=0, minf=219 00:32:36.113 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:36.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.113 issued rwts: total=1981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.113 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:36.113 00:32:36.113 Run status group 0 (all jobs): 00:32:36.113 READ: bw=69.7MiB/s (73.1MB/s), 21.9MiB/s-24.6MiB/s (22.9MB/s-25.8MB/s), io=701MiB (735MB), run=10047-10049msec 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.113 00:32:36.113 real 0m11.457s 00:32:36.113 user 0m41.600s 00:32:36.113 sys 0m1.211s 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:36.113 00:58:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:36.113 ************************************ 00:32:36.113 END TEST fio_dif_digest 00:32:36.113 ************************************ 00:32:36.113 00:58:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:36.113 00:58:52 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:36.113 00:58:52 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:36.113 00:58:52 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:36.113 00:58:52 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:36.113 00:58:52 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:36.113 00:58:52 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:36.113 00:58:52 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:36.113 00:58:52 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:32:36.113 rmmod nvme_tcp 00:32:36.113 rmmod nvme_fabrics 00:32:36.113 rmmod nvme_keyring 00:32:36.113 00:58:52 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:36.113 00:58:52 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:36.113 00:58:52 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:36.113 00:58:52 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3241051 ']' 00:32:36.113 00:58:52 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3241051 00:32:36.113 00:58:52 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3241051 ']' 00:32:36.113 00:58:52 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3241051 00:32:36.113 00:58:52 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:32:36.113 00:58:52 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:36.113 00:58:52 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3241051 00:32:36.113 00:58:52 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:36.113 00:58:52 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:36.113 00:58:52 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3241051' 00:32:36.113 killing process with pid 3241051 00:32:36.113 00:58:52 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3241051 00:32:36.113 00:58:52 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3241051 00:32:36.113 00:58:52 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:36.113 00:58:52 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:38.012 Waiting for block devices as requested 00:32:38.012 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:32:38.012 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:38.012 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:38.012 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:38.270 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:38.270 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:38.270 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:38.270 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:38.529 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:38.529 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:38.529 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:38.788 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:38.788 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:38.788 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:39.063 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:39.063 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:39.063 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:39.063 00:58:56 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:39.063 00:58:56 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:39.063 00:58:56 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:39.063 00:58:56 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:39.063 00:58:56 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.063 00:58:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:39.063 00:58:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.595 00:58:58 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:41.595 00:32:41.595 real 1m15.013s 00:32:41.595 user 7m42.667s 00:32:41.595 sys 0m19.649s 00:32:41.595 00:58:58 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:32:41.595 00:58:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:41.595 ************************************ 00:32:41.595 END TEST nvmf_dif 00:32:41.595 ************************************ 00:32:41.595 00:58:58 -- common/autotest_common.sh@1142 -- # return 0 00:32:41.595 00:58:58 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:41.595 00:58:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:41.595 00:58:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:41.595 00:58:58 -- common/autotest_common.sh@10 -- # set +x 00:32:41.595 ************************************ 00:32:41.595 START TEST nvmf_abort_qd_sizes 00:32:41.595 ************************************ 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:41.595 * Looking for test storage... 00:32:41.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.595 00:58:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.596 00:58:59 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:41.596 00:58:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:46.880 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:46.881 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:46.881 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:46.881 Found net devices under 0000:af:00.0: cvl_0_0 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:46.881 Found net devices under 0000:af:00.1: cvl_0_1 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
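A minimal sketch of the discovery loop traced above: gather_supported_nvmf_pci_devs matches the two E810 functions by PCI vendor/device ID (0x8086:0x159b), then resolves each function to its kernel net device through sysfs. The PCI addresses and interface names are taken from this log; the loop body is an illustrative assumption, not the verbatim nvmf/common.sh implementation.
intel=0x8086
pci_devs=(0000:af:00.0 0000:af:00.1)               # E810 functions matched via "$intel:0x159b"
net_devs=()
for pci in "${pci_devs[@]}"; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) # netdev directories behind this PCI function
  pci_net_devs=("${pci_net_devs[@]##*/}")          # keep only the interface names, e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done                                               # net_devs later feeds TCP_INTERFACE_LIST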
00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.881 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.139 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.139 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:47.139 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:47.139 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.139 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.139 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.139 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:47.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:47.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:32:47.140 00:32:47.140 --- 10.0.0.2 ping statistics --- 00:32:47.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.140 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:32:47.140 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:47.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:32:47.140 00:32:47.140 --- 10.0.0.1 ping statistics --- 00:32:47.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.140 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:32:47.140 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.140 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:47.140 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:47.140 00:59:04 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:50.426 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:50.426 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:50.994 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3258664 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3258664 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3258664 ']' 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:51.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:51.283 00:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:51.283 [2024-07-16 00:59:08.947307] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:32:51.283 [2024-07-16 00:59:08.947368] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.283 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.283 [2024-07-16 00:59:09.035717] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:51.541 [2024-07-16 00:59:09.129465] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.541 [2024-07-16 00:59:09.129506] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.541 [2024-07-16 00:59:09.129516] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.541 [2024-07-16 00:59:09.129525] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.541 [2024-07-16 00:59:09.129532] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:51.541 [2024-07-16 00:59:09.129585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.541 [2024-07-16 00:59:09.129695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:51.541 [2024-07-16 00:59:09.129806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:51.541 [2024-07-16 00:59:09.129807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:86:00.0 ]] 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:86:00.0 ]] 00:32:52.106 00:59:09 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:86:00.0 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:86:00.0 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:52.106 00:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:52.107 00:59:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:52.365 ************************************ 00:32:52.365 START TEST spdk_target_abort 00:32:52.365 ************************************ 00:32:52.365 00:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:32:52.365 00:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:52.365 00:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:86:00.0 -b spdk_target 00:32:52.365 00:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.365 00:59:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:55.008 spdk_targetn1 00:32:55.008 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.008 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:55.008 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.008 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:55.008 [2024-07-16 00:59:12.829909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.008 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.008 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:55.008 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.008 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:55.266 [2024-07-16 00:59:12.874763] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:55.266 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:55.267 00:59:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:55.267 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:58.548 Initializing NVMe Controllers 00:32:58.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:58.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:58.548 Initialization complete. Launching workers. 00:32:58.548 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6629, failed: 0 00:32:58.548 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1213, failed to submit 5416 00:32:58.548 success 725, unsuccess 488, failed 0 00:32:58.548 00:59:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:58.548 00:59:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:58.548 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.826 Initializing NVMe Controllers 00:33:01.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:01.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:01.826 Initialization complete. Launching workers. 00:33:01.826 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8585, failed: 0 00:33:01.826 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1195, failed to submit 7390 00:33:01.826 success 332, unsuccess 863, failed 0 00:33:01.826 00:59:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:01.826 00:59:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:01.826 EAL: No free 2048 kB hugepages reported on node 1 00:33:05.109 Initializing NVMe Controllers 00:33:05.109 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:05.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:05.109 Initialization complete. Launching workers. 
00:33:05.109 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17710, failed: 0 00:33:05.109 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2003, failed to submit 15707 00:33:05.109 success 154, unsuccess 1849, failed 0 00:33:05.109 00:59:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:05.109 00:59:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.109 00:59:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:05.109 00:59:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.109 00:59:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:05.109 00:59:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.109 00:59:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:06.477 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.477 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3258664 00:33:06.477 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3258664 ']' 00:33:06.477 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3258664 00:33:06.477 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:33:06.477 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:06.477 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3258664 00:33:06.477 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:06.477 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:06.477 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3258664' 00:33:06.477 killing process with pid 3258664 00:33:06.477 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3258664 00:33:06.477 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3258664 00:33:06.735 00:33:06.735 real 0m14.424s 00:33:06.735 user 0m57.919s 00:33:06.735 sys 0m2.112s 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:06.735 ************************************ 00:33:06.735 END TEST spdk_target_abort 00:33:06.735 ************************************ 00:33:06.735 00:59:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:33:06.735 00:59:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:06.735 00:59:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:06.735 00:59:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:06.735 00:59:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:06.735 
************************************ 00:33:06.735 START TEST kernel_target_abort 00:33:06.735 ************************************ 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:06.735 00:59:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:10.013 Waiting for block devices as requested 00:33:10.013 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:33:10.013 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:10.013 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:10.013 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:10.013 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:10.013 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:10.013 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:10.013 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:10.271 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:10.271 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:10.271 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:10.529 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:10.529 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:10.529 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:10.529 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:10.788 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:10.788 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:10.788 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:10.788 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:10.788 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:10.788 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:10.788 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:10.788 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:10.788 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:10.788 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:10.788 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:11.046 No valid GPT data, bailing 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:11.046 00:59:28 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 --hostid=00abaa28-3537-eb11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:11.046 00:33:11.046 Discovery Log Number of Records 2, Generation counter 2 00:33:11.046 =====Discovery Log Entry 0====== 00:33:11.046 trtype: tcp 00:33:11.046 adrfam: ipv4 00:33:11.046 subtype: current discovery subsystem 00:33:11.046 treq: not specified, sq flow control disable supported 00:33:11.046 portid: 1 00:33:11.046 trsvcid: 4420 00:33:11.046 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:11.046 traddr: 10.0.0.1 00:33:11.046 eflags: none 00:33:11.046 sectype: none 00:33:11.046 =====Discovery Log Entry 1====== 00:33:11.046 trtype: tcp 00:33:11.046 adrfam: ipv4 00:33:11.046 subtype: nvme subsystem 00:33:11.046 treq: not specified, sq flow control disable supported 00:33:11.046 portid: 1 00:33:11.046 trsvcid: 4420 00:33:11.046 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:11.046 traddr: 10.0.0.1 00:33:11.046 eflags: none 00:33:11.046 sectype: none 00:33:11.046 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:11.047 00:59:28 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:11.047 00:59:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:11.047 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.326 Initializing NVMe Controllers 00:33:14.326 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:14.326 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:14.326 Initialization complete. Launching workers. 00:33:14.326 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 50292, failed: 0 00:33:14.326 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 50292, failed to submit 0 00:33:14.326 success 0, unsuccess 50292, failed 0 00:33:14.326 00:59:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:14.326 00:59:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:14.326 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.609 Initializing NVMe Controllers 00:33:17.609 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:17.609 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:17.609 Initialization complete. Launching workers. 
00:33:17.609 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 83942, failed: 0 00:33:17.609 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21098, failed to submit 62844 00:33:17.609 success 0, unsuccess 21098, failed 0 00:33:17.609 00:59:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:17.609 00:59:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:17.609 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.889 Initializing NVMe Controllers 00:33:20.889 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:20.889 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:20.889 Initialization complete. Launching workers. 00:33:20.889 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80840, failed: 0 00:33:20.889 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20182, failed to submit 60658 00:33:20.889 success 0, unsuccess 20182, failed 0 00:33:20.889 00:59:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:20.889 00:59:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:20.889 00:59:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:33:20.889 00:59:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:20.889 00:59:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:20.889 00:59:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:20.889 00:59:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:20.889 00:59:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:20.889 00:59:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:20.889 00:59:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:23.422 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:33:23.422 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:23.422 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:23.987 0000:86:00.0 (8086 0a54): nvme -> vfio-pci 00:33:23.987 00:33:23.987 real 0m17.344s 00:33:23.987 user 0m8.302s 00:33:23.987 sys 0m5.042s 00:33:23.987 00:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:23.987 00:59:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:23.987 ************************************ 00:33:23.987 END TEST kernel_target_abort 00:33:23.987 ************************************ 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:24.245 rmmod nvme_tcp 00:33:24.245 rmmod nvme_fabrics 00:33:24.245 rmmod nvme_keyring 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3258664 ']' 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3258664 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3258664 ']' 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3258664 00:33:24.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3258664) - No such process 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3258664 is not found' 00:33:24.245 Process with pid 3258664 is not found 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:24.245 00:59:41 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:26.782 Waiting for block devices as requested 00:33:27.041 0000:86:00.0 (8086 0a54): vfio-pci -> nvme 00:33:27.041 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:27.041 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:27.311 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:27.311 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:27.311 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:27.569 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:27.569 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:27.569 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:27.569 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:27.829 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:27.829 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:27.829 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:28.087 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:33:28.087 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:28.087 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:28.087 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:28.346 00:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:28.346 00:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:28.346 00:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:28.346 00:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:28.346 00:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.346 00:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:28.346 00:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.251 00:59:48 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:30.251 00:33:30.251 real 0m49.054s 00:33:30.251 user 1m10.597s 00:33:30.251 sys 0m15.913s 00:33:30.251 00:59:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:30.251 00:59:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:30.251 ************************************ 00:33:30.251 END TEST nvmf_abort_qd_sizes 00:33:30.251 ************************************ 00:33:30.510 00:59:48 -- common/autotest_common.sh@1142 -- # return 0 00:33:30.510 00:59:48 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:30.510 00:59:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:30.510 00:59:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:30.510 00:59:48 -- common/autotest_common.sh@10 -- # set +x 00:33:30.510 ************************************ 00:33:30.510 START TEST keyring_file 00:33:30.510 ************************************ 00:33:30.510 00:59:48 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:30.510 * Looking for test storage... 
00:33:30.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:30.510 00:59:48 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:30.510 00:59:48 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:30.510 00:59:48 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.510 00:59:48 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.510 00:59:48 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.510 00:59:48 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.510 00:59:48 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.510 00:59:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:30.510 00:59:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:30.510 00:59:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:30.510 00:59:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:30.510 00:59:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:30.510 00:59:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:30.510 00:59:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:30.510 00:59:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HTNs7LZ8xN 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:30.510 00:59:48 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HTNs7LZ8xN 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HTNs7LZ8xN 00:33:30.510 00:59:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.HTNs7LZ8xN 00:33:30.510 00:59:48 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xiX3lfDJxX 00:33:30.510 00:59:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:30.510 00:59:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:30.768 00:59:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xiX3lfDJxX 00:33:30.768 00:59:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xiX3lfDJxX 00:33:30.768 00:59:48 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.xiX3lfDJxX 00:33:30.768 00:59:48 keyring_file -- keyring/file.sh@30 -- # tgtpid=3268034 00:33:30.768 00:59:48 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3268034 00:33:30.768 00:59:48 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:30.768 00:59:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3268034 ']' 00:33:30.768 00:59:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.768 00:59:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:30.768 00:59:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.768 00:59:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:30.768 00:59:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:30.768 [2024-07-16 00:59:48.447123] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
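The prep_key calls traced above wrap each raw hex key into the NVMe TLS PSK interchange format (format_interchange_psk with digest 0, i.e. no hash), write the result to a mktemp path, and restrict that file to mode 0600 before spdk_tgt comes up. A rough manual equivalent, assuming SPDK_DIR is a placeholder for the same checkout and reusing the helpers the test itself sources, would be:

    # sketch only; SPDK_DIR is a placeholder for the checkout used in this run
    source "$SPDK_DIR/test/nvmf/common.sh"         # provides format_interchange_psk / format_key
    key_path=$(mktemp)
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key_path"
    chmod 0600 "$key_path"                         # looser modes are rejected by keyring_file_add_key later in this run
    echo "$key_path"                               # this path is what gets registered as key0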
00:33:30.768 [2024-07-16 00:59:48.447186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268034 ] 00:33:30.768 EAL: No free 2048 kB hugepages reported on node 1 00:33:30.768 [2024-07-16 00:59:48.528558] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.026 [2024-07-16 00:59:48.619162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.026 00:59:48 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:31.026 00:59:48 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:31.026 00:59:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:31.026 00:59:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.026 00:59:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:31.026 [2024-07-16 00:59:48.829575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.026 null0 00:33:31.026 [2024-07-16 00:59:48.861641] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:31.026 [2024-07-16 00:59:48.862074] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:31.284 [2024-07-16 00:59:48.869635] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:31.284 00:59:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.284 00:59:48 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:31.284 00:59:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:31.284 00:59:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:31.284 00:59:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:31.284 00:59:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.284 00:59:48 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:31.284 00:59:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.284 00:59:48 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:31.284 00:59:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.284 00:59:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:31.285 [2024-07-16 00:59:48.881667] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:31.285 request: 00:33:31.285 { 00:33:31.285 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:31.285 "secure_channel": false, 00:33:31.285 "listen_address": { 00:33:31.285 "trtype": "tcp", 00:33:31.285 "traddr": "127.0.0.1", 00:33:31.285 "trsvcid": "4420" 00:33:31.285 }, 00:33:31.285 "method": "nvmf_subsystem_add_listener", 00:33:31.285 "req_id": 1 00:33:31.285 } 00:33:31.285 Got JSON-RPC error response 00:33:31.285 response: 00:33:31.285 { 00:33:31.285 "code": -32602, 00:33:31.285 "message": "Invalid parameters" 00:33:31.285 } 00:33:31.285 00:59:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:31.285 00:59:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:33:31.285 00:59:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:31.285 00:59:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:31.285 00:59:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:31.285 00:59:48 keyring_file -- keyring/file.sh@46 -- # bperfpid=3268097 00:33:31.285 00:59:48 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3268097 /var/tmp/bperf.sock 00:33:31.285 00:59:48 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:31.285 00:59:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3268097 ']' 00:33:31.285 00:59:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:31.285 00:59:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:31.285 00:59:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:31.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:31.285 00:59:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:31.285 00:59:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:31.285 [2024-07-16 00:59:48.937299] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 00:33:31.285 [2024-07-16 00:59:48.937354] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268097 ] 00:33:31.285 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.285 [2024-07-16 00:59:49.019651] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.543 [2024-07-16 00:59:49.124027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.108 00:59:49 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:32.108 00:59:49 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:32.108 00:59:49 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HTNs7LZ8xN 00:33:32.108 00:59:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HTNs7LZ8xN 00:33:32.365 00:59:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xiX3lfDJxX 00:33:32.365 00:59:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xiX3lfDJxX 00:33:32.622 00:59:50 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:32.622 00:59:50 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:32.622 00:59:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.622 00:59:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:32.622 00:59:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.880 00:59:50 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.HTNs7LZ8xN == \/\t\m\p\/\t\m\p\.\H\T\N\s\7\L\Z\8\x\N ]] 00:33:32.880 00:59:50 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:33:32.880 00:59:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:32.880 00:59:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.880 00:59:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:32.880 00:59:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.137 00:59:50 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.xiX3lfDJxX == \/\t\m\p\/\t\m\p\.\x\i\X\3\l\f\D\J\x\X ]] 00:33:33.137 00:59:50 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:33.137 00:59:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:33.137 00:59:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.137 00:59:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.137 00:59:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:33.137 00:59:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.394 00:59:51 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:33.394 00:59:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:33.394 00:59:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:33.394 00:59:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.394 00:59:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.394 00:59:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:33.394 00:59:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.652 00:59:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:33.652 00:59:51 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:33.652 00:59:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:33.910 [2024-07-16 00:59:51.607824] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:33.910 nvme0n1 00:33:33.910 00:59:51 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:33.910 00:59:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:33.910 00:59:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.910 00:59:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.910 00:59:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:33.910 00:59:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:34.167 00:59:51 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:34.167 00:59:51 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:34.167 00:59:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:34.167 00:59:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:34.167 00:59:51 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:34.167 00:59:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:34.167 00:59:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:34.424 00:59:52 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:34.424 00:59:52 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:34.680 Running I/O for 1 seconds... 00:33:35.612 00:33:35.612 Latency(us) 00:33:35.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.612 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:35.612 nvme0n1 : 1.01 8600.93 33.60 0.00 0.00 14820.27 5928.03 22997.18 00:33:35.612 =================================================================================================================== 00:33:35.612 Total : 8600.93 33.60 0.00 0.00 14820.27 5928.03 22997.18 00:33:35.612 0 00:33:35.612 00:59:53 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:35.612 00:59:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:35.869 00:59:53 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:35.869 00:59:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:35.869 00:59:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:35.869 00:59:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:35.869 00:59:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:35.869 00:59:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.125 00:59:53 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:36.125 00:59:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:36.125 00:59:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:36.125 00:59:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.125 00:59:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.125 00:59:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:36.125 00:59:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.384 00:59:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:36.384 00:59:54 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:36.384 00:59:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:36.384 00:59:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:36.384 00:59:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:36.384 00:59:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:36.384 00:59:54 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:36.384 00:59:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:36.384 00:59:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:36.384 00:59:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:36.663 [2024-07-16 00:59:54.343952] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:36.663 [2024-07-16 00:59:54.344510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7ef0 (107): Transport endpoint is not connected 00:33:36.663 [2024-07-16 00:59:54.345497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d7ef0 (9): Bad file descriptor 00:33:36.663 [2024-07-16 00:59:54.346497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:36.664 [2024-07-16 00:59:54.346514] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:36.664 [2024-07-16 00:59:54.346527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:36.664 request: 00:33:36.664 { 00:33:36.664 "name": "nvme0", 00:33:36.664 "trtype": "tcp", 00:33:36.664 "traddr": "127.0.0.1", 00:33:36.664 "adrfam": "ipv4", 00:33:36.664 "trsvcid": "4420", 00:33:36.664 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:36.664 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:36.664 "prchk_reftag": false, 00:33:36.664 "prchk_guard": false, 00:33:36.664 "hdgst": false, 00:33:36.664 "ddgst": false, 00:33:36.664 "psk": "key1", 00:33:36.664 "method": "bdev_nvme_attach_controller", 00:33:36.664 "req_id": 1 00:33:36.664 } 00:33:36.664 Got JSON-RPC error response 00:33:36.664 response: 00:33:36.664 { 00:33:36.664 "code": -5, 00:33:36.664 "message": "Input/output error" 00:33:36.664 } 00:33:36.664 00:59:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:36.664 00:59:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:36.664 00:59:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:36.664 00:59:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:36.664 00:59:54 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:36.664 00:59:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:36.664 00:59:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.664 00:59:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.664 00:59:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:36.664 00:59:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.945 00:59:54 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:36.945 00:59:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:36.945 00:59:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:36.945 00:59:54 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.945 00:59:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.945 00:59:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.945 00:59:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:37.212 00:59:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:37.212 00:59:54 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:37.212 00:59:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:37.775 00:59:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:37.775 00:59:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:38.052 00:59:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:38.052 00:59:55 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:38.052 00:59:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:38.052 00:59:55 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:38.052 00:59:55 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.HTNs7LZ8xN 00:33:38.309 00:59:55 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.HTNs7LZ8xN 00:33:38.309 00:59:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:38.309 00:59:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.HTNs7LZ8xN 00:33:38.309 00:59:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:38.309 00:59:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:38.309 00:59:55 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:38.309 00:59:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:38.309 00:59:55 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HTNs7LZ8xN 00:33:38.309 00:59:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HTNs7LZ8xN 00:33:38.309 [2024-07-16 00:59:56.120809] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.HTNs7LZ8xN': 0100660 00:33:38.309 [2024-07-16 00:59:56.120851] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:38.309 request: 00:33:38.309 { 00:33:38.309 "name": "key0", 00:33:38.309 "path": "/tmp/tmp.HTNs7LZ8xN", 00:33:38.309 "method": "keyring_file_add_key", 00:33:38.309 "req_id": 1 00:33:38.309 } 00:33:38.309 Got JSON-RPC error response 00:33:38.309 response: 00:33:38.309 { 00:33:38.309 "code": -1, 00:33:38.309 "message": "Operation not permitted" 00:33:38.309 } 00:33:38.309 00:59:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:38.309 00:59:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:38.309 00:59:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:38.309 00:59:56 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:38.309 00:59:56 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.HTNs7LZ8xN 00:33:38.309 00:59:56 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HTNs7LZ8xN 00:33:38.309 00:59:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HTNs7LZ8xN 00:33:38.565 00:59:56 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.HTNs7LZ8xN 00:33:38.565 00:59:56 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:38.565 00:59:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:38.565 00:59:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:38.565 00:59:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:38.565 00:59:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:38.565 00:59:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:38.821 00:59:56 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:38.821 00:59:56 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:38.821 00:59:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:38.821 00:59:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:38.821 00:59:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:38.821 00:59:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:38.821 00:59:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:38.821 00:59:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:38.821 00:59:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:38.821 00:59:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:39.078 [2024-07-16 00:59:56.858857] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.HTNs7LZ8xN': No such file or directory 00:33:39.078 [2024-07-16 00:59:56.858895] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:39.078 [2024-07-16 00:59:56.858931] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:39.078 [2024-07-16 00:59:56.858942] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:39.078 [2024-07-16 00:59:56.858953] bdev_nvme.c:6273:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:39.078 request: 00:33:39.078 { 00:33:39.078 "name": "nvme0", 00:33:39.078 "trtype": "tcp", 00:33:39.078 "traddr": "127.0.0.1", 00:33:39.078 "adrfam": "ipv4", 00:33:39.078 
"trsvcid": "4420", 00:33:39.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:39.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:39.078 "prchk_reftag": false, 00:33:39.078 "prchk_guard": false, 00:33:39.078 "hdgst": false, 00:33:39.078 "ddgst": false, 00:33:39.078 "psk": "key0", 00:33:39.078 "method": "bdev_nvme_attach_controller", 00:33:39.078 "req_id": 1 00:33:39.078 } 00:33:39.078 Got JSON-RPC error response 00:33:39.078 response: 00:33:39.078 { 00:33:39.078 "code": -19, 00:33:39.078 "message": "No such device" 00:33:39.078 } 00:33:39.078 00:59:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:39.078 00:59:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:39.078 00:59:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:39.078 00:59:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:39.078 00:59:56 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:39.078 00:59:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:39.334 00:59:57 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:39.334 00:59:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:39.334 00:59:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:39.334 00:59:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:39.334 00:59:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:39.334 00:59:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:39.334 00:59:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.600OeGGSem 00:33:39.334 00:59:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:39.334 00:59:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:39.334 00:59:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:39.334 00:59:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:39.334 00:59:57 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:39.334 00:59:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:39.334 00:59:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:39.590 00:59:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.600OeGGSem 00:33:39.590 00:59:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.600OeGGSem 00:33:39.590 00:59:57 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.600OeGGSem 00:33:39.590 00:59:57 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.600OeGGSem 00:33:39.590 00:59:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.600OeGGSem 00:33:39.846 00:59:57 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:39.846 00:59:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:40.102 nvme0n1 00:33:40.102 
00:59:57 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:40.102 00:59:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:40.102 00:59:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.102 00:59:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.102 00:59:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:40.102 00:59:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.358 00:59:58 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:40.358 00:59:58 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:40.358 00:59:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:40.614 00:59:58 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:40.614 00:59:58 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:40.614 00:59:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.614 00:59:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:40.614 00:59:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.869 00:59:58 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:40.869 00:59:58 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:40.869 00:59:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.869 00:59:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:40.869 00:59:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.869 00:59:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.869 00:59:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:41.125 00:59:58 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:41.125 00:59:58 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:41.125 00:59:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:41.382 00:59:59 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:41.382 00:59:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:41.382 00:59:59 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:41.638 00:59:59 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:41.638 00:59:59 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.600OeGGSem 00:33:41.638 00:59:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.600OeGGSem 00:33:41.894 00:59:59 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xiX3lfDJxX 00:33:41.894 00:59:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xiX3lfDJxX 00:33:42.458 01:00:00 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:42.458 01:00:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:42.715 nvme0n1 00:33:42.715 01:00:00 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:42.715 01:00:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:42.973 01:00:00 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:42.973 "subsystems": [ 00:33:42.973 { 00:33:42.973 "subsystem": "keyring", 00:33:42.973 "config": [ 00:33:42.973 { 00:33:42.974 "method": "keyring_file_add_key", 00:33:42.974 "params": { 00:33:42.974 "name": "key0", 00:33:42.974 "path": "/tmp/tmp.600OeGGSem" 00:33:42.974 } 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "method": "keyring_file_add_key", 00:33:42.974 "params": { 00:33:42.974 "name": "key1", 00:33:42.974 "path": "/tmp/tmp.xiX3lfDJxX" 00:33:42.974 } 00:33:42.974 } 00:33:42.974 ] 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "subsystem": "iobuf", 00:33:42.974 "config": [ 00:33:42.974 { 00:33:42.974 "method": "iobuf_set_options", 00:33:42.974 "params": { 00:33:42.974 "small_pool_count": 8192, 00:33:42.974 "large_pool_count": 1024, 00:33:42.974 "small_bufsize": 8192, 00:33:42.974 "large_bufsize": 135168 00:33:42.974 } 00:33:42.974 } 00:33:42.974 ] 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "subsystem": "sock", 00:33:42.974 "config": [ 00:33:42.974 { 00:33:42.974 "method": "sock_set_default_impl", 00:33:42.974 "params": { 00:33:42.974 "impl_name": "posix" 00:33:42.974 } 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "method": "sock_impl_set_options", 00:33:42.974 "params": { 00:33:42.974 "impl_name": "ssl", 00:33:42.974 "recv_buf_size": 4096, 00:33:42.974 "send_buf_size": 4096, 00:33:42.974 "enable_recv_pipe": true, 00:33:42.974 "enable_quickack": false, 00:33:42.974 "enable_placement_id": 0, 00:33:42.974 "enable_zerocopy_send_server": true, 00:33:42.974 "enable_zerocopy_send_client": false, 00:33:42.974 "zerocopy_threshold": 0, 00:33:42.974 "tls_version": 0, 00:33:42.974 "enable_ktls": false 00:33:42.974 } 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "method": "sock_impl_set_options", 00:33:42.974 "params": { 00:33:42.974 "impl_name": "posix", 00:33:42.974 "recv_buf_size": 2097152, 00:33:42.974 "send_buf_size": 2097152, 00:33:42.974 "enable_recv_pipe": true, 00:33:42.974 "enable_quickack": false, 00:33:42.974 "enable_placement_id": 0, 00:33:42.974 "enable_zerocopy_send_server": true, 00:33:42.974 "enable_zerocopy_send_client": false, 00:33:42.974 "zerocopy_threshold": 0, 00:33:42.974 "tls_version": 0, 00:33:42.974 "enable_ktls": false 00:33:42.974 } 00:33:42.974 } 00:33:42.974 ] 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "subsystem": "vmd", 00:33:42.974 "config": [] 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "subsystem": "accel", 00:33:42.974 "config": [ 00:33:42.974 { 00:33:42.974 "method": "accel_set_options", 00:33:42.974 "params": { 00:33:42.974 "small_cache_size": 128, 00:33:42.974 "large_cache_size": 16, 00:33:42.974 "task_count": 2048, 00:33:42.974 "sequence_count": 2048, 00:33:42.974 "buf_count": 2048 00:33:42.974 } 00:33:42.974 } 00:33:42.974 ] 00:33:42.974 
}, 00:33:42.974 { 00:33:42.974 "subsystem": "bdev", 00:33:42.974 "config": [ 00:33:42.974 { 00:33:42.974 "method": "bdev_set_options", 00:33:42.974 "params": { 00:33:42.974 "bdev_io_pool_size": 65535, 00:33:42.974 "bdev_io_cache_size": 256, 00:33:42.974 "bdev_auto_examine": true, 00:33:42.974 "iobuf_small_cache_size": 128, 00:33:42.974 "iobuf_large_cache_size": 16 00:33:42.974 } 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "method": "bdev_raid_set_options", 00:33:42.974 "params": { 00:33:42.974 "process_window_size_kb": 1024 00:33:42.974 } 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "method": "bdev_iscsi_set_options", 00:33:42.974 "params": { 00:33:42.974 "timeout_sec": 30 00:33:42.974 } 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "method": "bdev_nvme_set_options", 00:33:42.974 "params": { 00:33:42.974 "action_on_timeout": "none", 00:33:42.974 "timeout_us": 0, 00:33:42.974 "timeout_admin_us": 0, 00:33:42.974 "keep_alive_timeout_ms": 10000, 00:33:42.974 "arbitration_burst": 0, 00:33:42.974 "low_priority_weight": 0, 00:33:42.974 "medium_priority_weight": 0, 00:33:42.974 "high_priority_weight": 0, 00:33:42.974 "nvme_adminq_poll_period_us": 10000, 00:33:42.974 "nvme_ioq_poll_period_us": 0, 00:33:42.974 "io_queue_requests": 512, 00:33:42.974 "delay_cmd_submit": true, 00:33:42.974 "transport_retry_count": 4, 00:33:42.974 "bdev_retry_count": 3, 00:33:42.974 "transport_ack_timeout": 0, 00:33:42.974 "ctrlr_loss_timeout_sec": 0, 00:33:42.974 "reconnect_delay_sec": 0, 00:33:42.974 "fast_io_fail_timeout_sec": 0, 00:33:42.974 "disable_auto_failback": false, 00:33:42.974 "generate_uuids": false, 00:33:42.974 "transport_tos": 0, 00:33:42.974 "nvme_error_stat": false, 00:33:42.974 "rdma_srq_size": 0, 00:33:42.974 "io_path_stat": false, 00:33:42.974 "allow_accel_sequence": false, 00:33:42.974 "rdma_max_cq_size": 0, 00:33:42.974 "rdma_cm_event_timeout_ms": 0, 00:33:42.974 "dhchap_digests": [ 00:33:42.974 "sha256", 00:33:42.974 "sha384", 00:33:42.974 "sha512" 00:33:42.974 ], 00:33:42.974 "dhchap_dhgroups": [ 00:33:42.974 "null", 00:33:42.974 "ffdhe2048", 00:33:42.974 "ffdhe3072", 00:33:42.974 "ffdhe4096", 00:33:42.974 "ffdhe6144", 00:33:42.974 "ffdhe8192" 00:33:42.974 ] 00:33:42.974 } 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "method": "bdev_nvme_attach_controller", 00:33:42.974 "params": { 00:33:42.974 "name": "nvme0", 00:33:42.974 "trtype": "TCP", 00:33:42.974 "adrfam": "IPv4", 00:33:42.974 "traddr": "127.0.0.1", 00:33:42.974 "trsvcid": "4420", 00:33:42.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:42.974 "prchk_reftag": false, 00:33:42.974 "prchk_guard": false, 00:33:42.974 "ctrlr_loss_timeout_sec": 0, 00:33:42.974 "reconnect_delay_sec": 0, 00:33:42.974 "fast_io_fail_timeout_sec": 0, 00:33:42.974 "psk": "key0", 00:33:42.974 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:42.974 "hdgst": false, 00:33:42.974 "ddgst": false 00:33:42.974 } 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "method": "bdev_nvme_set_hotplug", 00:33:42.974 "params": { 00:33:42.974 "period_us": 100000, 00:33:42.974 "enable": false 00:33:42.974 } 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "method": "bdev_wait_for_examine" 00:33:42.974 } 00:33:42.974 ] 00:33:42.974 }, 00:33:42.974 { 00:33:42.974 "subsystem": "nbd", 00:33:42.974 "config": [] 00:33:42.974 } 00:33:42.974 ] 00:33:42.974 }' 00:33:42.974 01:00:00 keyring_file -- keyring/file.sh@114 -- # killprocess 3268097 00:33:42.974 01:00:00 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3268097 ']' 00:33:42.974 01:00:00 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3268097 00:33:42.974 01:00:00 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:42.974 01:00:00 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:42.974 01:00:00 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3268097 00:33:42.974 01:00:00 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:42.974 01:00:00 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:42.974 01:00:00 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3268097' 00:33:42.974 killing process with pid 3268097 00:33:42.974 01:00:00 keyring_file -- common/autotest_common.sh@967 -- # kill 3268097 00:33:42.974 Received shutdown signal, test time was about 1.000000 seconds 00:33:42.974 00:33:42.974 Latency(us) 00:33:42.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.974 =================================================================================================================== 00:33:42.974 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:42.974 01:00:00 keyring_file -- common/autotest_common.sh@972 -- # wait 3268097 00:33:43.232 01:00:00 keyring_file -- keyring/file.sh@117 -- # bperfpid=3270397 00:33:43.232 01:00:00 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3270397 /var/tmp/bperf.sock 00:33:43.232 01:00:00 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3270397 ']' 00:33:43.232 01:00:00 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:43.232 01:00:00 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:43.232 01:00:00 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:43.232 01:00:00 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:43.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
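Here the test relaunches bdevperf with -c /dev/fd/63: instead of re-issuing each RPC, it replays the JSON captured earlier by save_config (echoed below) through a file descriptor, so both file-based keys and the nvme0 controller exist as soon as the new process is listening. The same pattern can be reproduced with process substitution; config.json below is a hypothetical file holding that saved JSON:

    # sketch of the config-over-fd launch shown above; config.json is a placeholder
    # holding the save_config output captured from the previous bdevperf instance
    "$SPDK_DIR/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(cat config.json)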
00:33:43.232 01:00:00 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:43.232 "subsystems": [ 00:33:43.232 { 00:33:43.232 "subsystem": "keyring", 00:33:43.232 "config": [ 00:33:43.232 { 00:33:43.232 "method": "keyring_file_add_key", 00:33:43.232 "params": { 00:33:43.232 "name": "key0", 00:33:43.232 "path": "/tmp/tmp.600OeGGSem" 00:33:43.232 } 00:33:43.232 }, 00:33:43.232 { 00:33:43.232 "method": "keyring_file_add_key", 00:33:43.232 "params": { 00:33:43.232 "name": "key1", 00:33:43.232 "path": "/tmp/tmp.xiX3lfDJxX" 00:33:43.232 } 00:33:43.232 } 00:33:43.232 ] 00:33:43.232 }, 00:33:43.232 { 00:33:43.232 "subsystem": "iobuf", 00:33:43.232 "config": [ 00:33:43.232 { 00:33:43.232 "method": "iobuf_set_options", 00:33:43.232 "params": { 00:33:43.232 "small_pool_count": 8192, 00:33:43.232 "large_pool_count": 1024, 00:33:43.232 "small_bufsize": 8192, 00:33:43.232 "large_bufsize": 135168 00:33:43.232 } 00:33:43.232 } 00:33:43.232 ] 00:33:43.232 }, 00:33:43.232 { 00:33:43.232 "subsystem": "sock", 00:33:43.232 "config": [ 00:33:43.232 { 00:33:43.232 "method": "sock_set_default_impl", 00:33:43.232 "params": { 00:33:43.232 "impl_name": "posix" 00:33:43.232 } 00:33:43.232 }, 00:33:43.232 { 00:33:43.232 "method": "sock_impl_set_options", 00:33:43.232 "params": { 00:33:43.232 "impl_name": "ssl", 00:33:43.232 "recv_buf_size": 4096, 00:33:43.232 "send_buf_size": 4096, 00:33:43.232 "enable_recv_pipe": true, 00:33:43.232 "enable_quickack": false, 00:33:43.232 "enable_placement_id": 0, 00:33:43.232 "enable_zerocopy_send_server": true, 00:33:43.232 "enable_zerocopy_send_client": false, 00:33:43.232 "zerocopy_threshold": 0, 00:33:43.232 "tls_version": 0, 00:33:43.232 "enable_ktls": false 00:33:43.232 } 00:33:43.232 }, 00:33:43.232 { 00:33:43.232 "method": "sock_impl_set_options", 00:33:43.232 "params": { 00:33:43.232 "impl_name": "posix", 00:33:43.232 "recv_buf_size": 2097152, 00:33:43.232 "send_buf_size": 2097152, 00:33:43.232 "enable_recv_pipe": true, 00:33:43.232 "enable_quickack": false, 00:33:43.232 "enable_placement_id": 0, 00:33:43.232 "enable_zerocopy_send_server": true, 00:33:43.232 "enable_zerocopy_send_client": false, 00:33:43.232 "zerocopy_threshold": 0, 00:33:43.232 "tls_version": 0, 00:33:43.232 "enable_ktls": false 00:33:43.232 } 00:33:43.232 } 00:33:43.232 ] 00:33:43.232 }, 00:33:43.232 { 00:33:43.232 "subsystem": "vmd", 00:33:43.232 "config": [] 00:33:43.232 }, 00:33:43.232 { 00:33:43.232 "subsystem": "accel", 00:33:43.232 "config": [ 00:33:43.232 { 00:33:43.232 "method": "accel_set_options", 00:33:43.232 "params": { 00:33:43.232 "small_cache_size": 128, 00:33:43.232 "large_cache_size": 16, 00:33:43.232 "task_count": 2048, 00:33:43.232 "sequence_count": 2048, 00:33:43.232 "buf_count": 2048 00:33:43.232 } 00:33:43.232 } 00:33:43.232 ] 00:33:43.232 }, 00:33:43.232 { 00:33:43.232 "subsystem": "bdev", 00:33:43.232 "config": [ 00:33:43.232 { 00:33:43.232 "method": "bdev_set_options", 00:33:43.232 "params": { 00:33:43.232 "bdev_io_pool_size": 65535, 00:33:43.232 "bdev_io_cache_size": 256, 00:33:43.232 "bdev_auto_examine": true, 00:33:43.232 "iobuf_small_cache_size": 128, 00:33:43.232 "iobuf_large_cache_size": 16 00:33:43.232 } 00:33:43.232 }, 00:33:43.232 { 00:33:43.232 "method": "bdev_raid_set_options", 00:33:43.233 "params": { 00:33:43.233 "process_window_size_kb": 1024 00:33:43.233 } 00:33:43.233 }, 00:33:43.233 { 00:33:43.233 "method": "bdev_iscsi_set_options", 00:33:43.233 "params": { 00:33:43.233 "timeout_sec": 30 00:33:43.233 } 00:33:43.233 }, 00:33:43.233 { 00:33:43.233 "method": 
"bdev_nvme_set_options", 00:33:43.233 "params": { 00:33:43.233 "action_on_timeout": "none", 00:33:43.233 "timeout_us": 0, 00:33:43.233 "timeout_admin_us": 0, 00:33:43.233 "keep_alive_timeout_ms": 10000, 00:33:43.233 "arbitration_burst": 0, 00:33:43.233 "low_priority_weight": 0, 00:33:43.233 "medium_priority_weight": 0, 00:33:43.233 "high_priority_weight": 0, 00:33:43.233 "nvme_adminq_poll_period_us": 10000, 00:33:43.233 "nvme_ioq_poll_period_us": 0, 00:33:43.233 "io_queue_requests": 512, 00:33:43.233 "delay_cmd_submit": true, 00:33:43.233 "transport_retry_count": 4, 00:33:43.233 "bdev_retry_count": 3, 00:33:43.233 "transport_ack_timeout": 0, 00:33:43.233 "ctrlr_loss_timeout_sec": 0, 00:33:43.233 "reconnect_delay_sec": 0, 00:33:43.233 "fast_io_fail_timeout_sec": 0, 00:33:43.233 "disable_auto_failback": false, 00:33:43.233 "generate_uuids": false, 00:33:43.233 "transport_tos": 0, 00:33:43.233 "nvme_error_stat": false, 00:33:43.233 "rdma_srq_size": 0, 00:33:43.233 "io_path_stat": false, 00:33:43.233 "allow_accel_sequence": false, 00:33:43.233 "rdma_max_cq_size": 0, 00:33:43.233 "rdma_cm_event_timeout_ms": 0, 00:33:43.233 "dhchap_digests": [ 00:33:43.233 "sha256", 00:33:43.233 "sha384", 00:33:43.233 "sha512" 00:33:43.233 ], 00:33:43.233 "dhchap_dhgroups": [ 00:33:43.233 "null", 00:33:43.233 "ffdhe2048", 00:33:43.233 "ffdhe3072", 00:33:43.233 "ffdhe4096", 00:33:43.233 "ffdhe6144", 00:33:43.233 "ffdhe8192" 00:33:43.233 ] 00:33:43.233 } 00:33:43.233 }, 00:33:43.233 { 00:33:43.233 "method": "bdev_nvme_attach_controller", 00:33:43.233 "params": { 00:33:43.233 "name": "nvme0", 00:33:43.233 "trtype": "TCP", 00:33:43.233 "adrfam": "IPv4", 00:33:43.233 "traddr": "127.0.0.1", 00:33:43.233 "trsvcid": "4420", 00:33:43.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:43.233 "prchk_reftag": false, 00:33:43.233 "prchk_guard": false, 00:33:43.233 "ctrlr_loss_timeout_sec": 0, 00:33:43.233 "reconnect_delay_sec": 0, 00:33:43.233 "fast_io_fail_timeout_sec": 0, 00:33:43.233 "psk": "key0", 00:33:43.233 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:43.233 "hdgst": false, 00:33:43.233 "ddgst": false 00:33:43.233 } 00:33:43.233 }, 00:33:43.233 { 00:33:43.233 "method": "bdev_nvme_set_hotplug", 00:33:43.233 "params": { 00:33:43.233 "period_us": 100000, 00:33:43.233 "enable": false 00:33:43.233 } 00:33:43.233 }, 00:33:43.233 { 00:33:43.233 "method": "bdev_wait_for_examine" 00:33:43.233 } 00:33:43.233 ] 00:33:43.233 }, 00:33:43.233 { 00:33:43.233 "subsystem": "nbd", 00:33:43.233 "config": [] 00:33:43.233 } 00:33:43.233 ] 00:33:43.233 }' 00:33:43.233 01:00:00 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:43.233 01:00:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:43.233 [2024-07-16 01:00:01.005025] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
00:33:43.233 [2024-07-16 01:00:01.005086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3270397 ] 00:33:43.233 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.491 [2024-07-16 01:00:01.088391] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.491 [2024-07-16 01:00:01.193003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.748 [2024-07-16 01:00:01.366108] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:44.312 01:00:01 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:44.312 01:00:01 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:44.312 01:00:01 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:44.312 01:00:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.312 01:00:01 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:44.568 01:00:02 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:44.568 01:00:02 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:44.568 01:00:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:44.568 01:00:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:44.568 01:00:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:44.568 01:00:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:44.568 01:00:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.824 01:00:02 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:44.824 01:00:02 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:44.824 01:00:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:44.824 01:00:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:44.824 01:00:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:44.824 01:00:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:44.824 01:00:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:45.081 01:00:02 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:45.081 01:00:02 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:45.081 01:00:02 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:45.081 01:00:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:45.338 01:00:02 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:45.338 01:00:02 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:45.338 01:00:02 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.600OeGGSem /tmp/tmp.xiX3lfDJxX 00:33:45.338 01:00:02 keyring_file -- keyring/file.sh@20 -- # killprocess 3270397 00:33:45.338 01:00:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3270397 ']' 00:33:45.338 01:00:02 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3270397 00:33:45.338 01:00:02 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:33:45.338 01:00:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:45.338 01:00:02 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3270397 00:33:45.338 01:00:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:45.338 01:00:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:45.338 01:00:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3270397' 00:33:45.338 killing process with pid 3270397 00:33:45.338 01:00:02 keyring_file -- common/autotest_common.sh@967 -- # kill 3270397 00:33:45.338 Received shutdown signal, test time was about 1.000000 seconds 00:33:45.338 00:33:45.338 Latency(us) 00:33:45.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.338 =================================================================================================================== 00:33:45.338 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:45.338 01:00:02 keyring_file -- common/autotest_common.sh@972 -- # wait 3270397 00:33:45.596 01:00:03 keyring_file -- keyring/file.sh@21 -- # killprocess 3268034 00:33:45.596 01:00:03 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3268034 ']' 00:33:45.596 01:00:03 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3268034 00:33:45.596 01:00:03 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:45.596 01:00:03 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:45.596 01:00:03 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3268034 00:33:45.596 01:00:03 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:45.596 01:00:03 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:45.596 01:00:03 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3268034' 00:33:45.596 killing process with pid 3268034 00:33:45.596 01:00:03 keyring_file -- common/autotest_common.sh@967 -- # kill 3268034 00:33:45.596 [2024-07-16 01:00:03.267221] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:45.596 01:00:03 keyring_file -- common/autotest_common.sh@972 -- # wait 3268034 00:33:45.856 00:33:45.856 real 0m15.464s 00:33:45.856 user 0m38.998s 00:33:45.856 sys 0m3.228s 00:33:45.856 01:00:03 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:45.856 01:00:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:45.856 ************************************ 00:33:45.856 END TEST keyring_file 00:33:45.856 ************************************ 00:33:45.856 01:00:03 -- common/autotest_common.sh@1142 -- # return 0 00:33:45.856 01:00:03 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:33:45.856 01:00:03 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:45.856 01:00:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:45.856 01:00:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:45.856 01:00:03 -- common/autotest_common.sh@10 -- # set +x 00:33:45.856 ************************************ 00:33:45.856 START TEST keyring_linux 00:33:45.856 ************************************ 00:33:45.856 01:00:03 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:46.115 * Looking for test storage... 00:33:46.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:46.115 01:00:03 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:46.115 01:00:03 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00abaa28-3537-eb11-906e-0017a4403562 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00abaa28-3537-eb11-906e-0017a4403562 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.115 01:00:03 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.115 01:00:03 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.115 01:00:03 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.115 01:00:03 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.115 01:00:03 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.115 01:00:03 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.115 01:00:03 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.115 01:00:03 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:46.116 01:00:03 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:46.116 01:00:03 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:46.116 01:00:03 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:46.116 01:00:03 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:46.116 01:00:03 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:46.116 01:00:03 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:46.116 01:00:03 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:46.116 01:00:03 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:46.116 /tmp/:spdk-test:key0 00:33:46.116 01:00:03 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:46.116 01:00:03 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:46.116 01:00:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:46.116 /tmp/:spdk-test:key1 00:33:46.116 01:00:03 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:46.116 01:00:03 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3271050 00:33:46.116 01:00:03 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3271050 00:33:46.116 01:00:03 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3271050 ']' 00:33:46.116 01:00:03 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.116 01:00:03 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:46.116 01:00:03 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.116 01:00:03 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:46.116 01:00:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:46.116 [2024-07-16 01:00:03.943419] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
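For reference, the prep_key calls traced above only do two things: render the configured hex string into the NVMe TLS PSK interchange format and write it to a mode-0600 file. A stand-alone sketch of that flow for key0 follows; the key value and path are taken from this run, but the python body is an illustrative reconstruction of format_interchange_psk rather than the exact helper from nvmf/common.sh, and the little-endian CRC32 trailer is an assumption.

# Sketch: build the interchange-format PSK roughly the way prep_key does.
key=00112233445566778899aabbccddeeff
path=/tmp/:spdk-test:key0
psk=$(python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # configured key, kept as ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC32 trailer (byte order assumed)
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")   # "00" = no HMAC
PY
)
printf '%s\n' "$psk" > "$path"
chmod 0600 "$path"    # keyring_file consumes this path directly; keyring_linux instead
echo "$path"          # loads the same interchange string into the kernel session keyring

If the byte-order assumption holds, this reproduces the NVMeTLSkey-1:00:MDAx... string that is echoed into the keyring a few entries further on.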
00:33:46.116 [2024-07-16 01:00:03.943484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271050 ] 00:33:46.375 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.375 [2024-07-16 01:00:04.027815] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.375 [2024-07-16 01:00:04.114705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.634 01:00:04 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:46.634 01:00:04 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:46.634 01:00:04 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:46.634 01:00:04 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.634 01:00:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:46.634 [2024-07-16 01:00:04.340896] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.634 null0 00:33:46.634 [2024-07-16 01:00:04.372932] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:46.634 [2024-07-16 01:00:04.373312] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:46.634 01:00:04 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.634 01:00:04 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:46.634 811017951 00:33:46.634 01:00:04 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:46.634 639552074 00:33:46.634 01:00:04 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3271084 00:33:46.634 01:00:04 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3271084 /var/tmp/bperf.sock 00:33:46.634 01:00:04 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:46.634 01:00:04 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3271084 ']' 00:33:46.634 01:00:04 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:46.634 01:00:04 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:46.634 01:00:04 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:46.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:46.634 01:00:04 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:46.634 01:00:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:46.634 [2024-07-16 01:00:04.447501] Starting SPDK v24.09-pre git sha1 e9e51ebfe / DPDK 24.03.0 initialization... 
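The two keyctl add calls above are the only kernel-keyring plumbing this test needs: each interchange-format PSK is stored as a "user" key in the session keyring (@s), and the serial number keyctl returns (811017951 and 639552074 here) is what linux.sh later compares against the sn reported over RPC. A minimal round trip outside the test scripts looks like the sketch below; the description and PSK string are copied from this run, while serial numbers differ per boot.

# Store one PSK in the session keyring the way linux.sh does, then resolve and inspect it.
# Requires the keyutils package for the keyctl tool.
desc=":spdk-test:key0"
psk="NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"
sn=$(keyctl add user "$desc" "$psk" @s)   # add as a "user" key to the session keyring
keyctl search @s user "$desc"             # resolves the same serial number by description
keyctl print "$sn"                        # prints the stored interchange PSK back
keyctl unlink "$sn"                       # cleanup, mirroring unlink_key in linux.sh

bdevperf then refers to the key purely by its description, via bdev_nvme_attach_controller --psk :spdk-test:key0, as seen a few entries further on.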
00:33:46.634 [2024-07-16 01:00:04.447556] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271084 ] 00:33:46.892 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.892 [2024-07-16 01:00:04.529805] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.892 [2024-07-16 01:00:04.634194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.825 01:00:05 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:47.825 01:00:05 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:47.825 01:00:05 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:47.825 01:00:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:47.825 01:00:05 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:47.825 01:00:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:48.391 01:00:05 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:48.391 01:00:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:48.391 [2024-07-16 01:00:06.179271] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:48.648 nvme0n1 00:33:48.648 01:00:06 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:48.648 01:00:06 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:48.648 01:00:06 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:48.648 01:00:06 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:48.648 01:00:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:48.648 01:00:06 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:48.904 01:00:06 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:48.904 01:00:06 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:48.904 01:00:06 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:48.904 01:00:06 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:48.904 01:00:06 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:48.904 01:00:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:48.904 01:00:06 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:49.161 01:00:06 keyring_linux -- keyring/linux.sh@25 -- # sn=811017951 00:33:49.161 01:00:06 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:49.161 01:00:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:49.161 01:00:06 keyring_linux -- keyring/linux.sh@26 -- # [[ 811017951 == \8\1\1\0\1\7\9\5\1 ]] 00:33:49.161 01:00:06 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 811017951 00:33:49.161 01:00:06 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:49.161 01:00:06 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:49.161 Running I/O for 1 seconds... 00:33:50.092 00:33:50.092 Latency(us) 00:33:50.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.092 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:50.092 nvme0n1 : 1.01 8385.04 32.75 0.00 0.00 15174.25 11081.54 27048.49 00:33:50.092 =================================================================================================================== 00:33:50.092 Total : 8385.04 32.75 0.00 0.00 15174.25 11081.54 27048.49 00:33:50.092 0 00:33:50.092 01:00:07 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:50.092 01:00:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:50.674 01:00:08 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:50.674 01:00:08 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:50.674 01:00:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:50.674 01:00:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:50.674 01:00:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:50.674 01:00:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:50.937 01:00:08 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:50.937 01:00:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:50.937 01:00:08 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:50.937 01:00:08 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:50.937 01:00:08 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:50.937 01:00:08 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:50.937 01:00:08 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:50.937 01:00:08 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:50.937 01:00:08 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:50.937 01:00:08 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:50.937 01:00:08 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:50.937 01:00:08 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:51.207 [2024-07-16 01:00:08.909215] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:51.207 [2024-07-16 01:00:08.909503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2711410 (107): Transport endpoint is not connected 00:33:51.207 [2024-07-16 01:00:08.910494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2711410 (9): Bad file descriptor 00:33:51.207 [2024-07-16 01:00:08.911493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:51.207 [2024-07-16 01:00:08.911510] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:51.207 [2024-07-16 01:00:08.911522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:51.207 request: 00:33:51.207 { 00:33:51.207 "name": "nvme0", 00:33:51.207 "trtype": "tcp", 00:33:51.207 "traddr": "127.0.0.1", 00:33:51.207 "adrfam": "ipv4", 00:33:51.207 "trsvcid": "4420", 00:33:51.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:51.207 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:51.207 "prchk_reftag": false, 00:33:51.207 "prchk_guard": false, 00:33:51.207 "hdgst": false, 00:33:51.207 "ddgst": false, 00:33:51.207 "psk": ":spdk-test:key1", 00:33:51.207 "method": "bdev_nvme_attach_controller", 00:33:51.207 "req_id": 1 00:33:51.207 } 00:33:51.207 Got JSON-RPC error response 00:33:51.207 response: 00:33:51.207 { 00:33:51.207 "code": -5, 00:33:51.207 "message": "Input/output error" 00:33:51.207 } 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@33 -- # sn=811017951 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 811017951 00:33:51.207 1 links removed 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@33 -- # sn=639552074 00:33:51.207 
01:00:08 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 639552074 00:33:51.207 1 links removed 00:33:51.207 01:00:08 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3271084 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3271084 ']' 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3271084 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3271084 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3271084' 00:33:51.207 killing process with pid 3271084 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@967 -- # kill 3271084 00:33:51.207 Received shutdown signal, test time was about 1.000000 seconds 00:33:51.207 00:33:51.207 Latency(us) 00:33:51.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.207 =================================================================================================================== 00:33:51.207 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:51.207 01:00:08 keyring_linux -- common/autotest_common.sh@972 -- # wait 3271084 00:33:51.467 01:00:09 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3271050 00:33:51.467 01:00:09 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3271050 ']' 00:33:51.467 01:00:09 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3271050 00:33:51.467 01:00:09 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:51.467 01:00:09 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:51.467 01:00:09 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3271050 00:33:51.467 01:00:09 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:51.467 01:00:09 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:51.467 01:00:09 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3271050' 00:33:51.467 killing process with pid 3271050 00:33:51.467 01:00:09 keyring_linux -- common/autotest_common.sh@967 -- # kill 3271050 00:33:51.467 01:00:09 keyring_linux -- common/autotest_common.sh@972 -- # wait 3271050 00:33:52.044 00:33:52.044 real 0m5.927s 00:33:52.044 user 0m11.842s 00:33:52.044 sys 0m1.602s 00:33:52.045 01:00:09 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:52.045 01:00:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:52.045 ************************************ 00:33:52.045 END TEST keyring_linux 00:33:52.045 ************************************ 00:33:52.045 01:00:09 -- common/autotest_common.sh@1142 -- # return 0 00:33:52.045 01:00:09 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:52.045 01:00:09 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:52.045 01:00:09 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:52.045 01:00:09 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:52.045 01:00:09 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:52.045 01:00:09 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:52.045 01:00:09 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:52.045 01:00:09 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:52.045 01:00:09 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:52.045 01:00:09 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:52.045 01:00:09 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:52.045 01:00:09 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:52.045 01:00:09 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:52.045 01:00:09 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:52.045 01:00:09 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:52.045 01:00:09 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:52.045 01:00:09 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:52.045 01:00:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:52.045 01:00:09 -- common/autotest_common.sh@10 -- # set +x 00:33:52.045 01:00:09 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:52.045 01:00:09 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:52.045 01:00:09 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:52.045 01:00:09 -- common/autotest_common.sh@10 -- # set +x 00:33:57.363 INFO: APP EXITING 00:33:57.363 INFO: killing all VMs 00:33:57.363 INFO: killing vhost app 00:33:57.363 WARN: no vhost pid file found 00:33:57.363 INFO: EXIT DONE 00:34:00.672 0000:86:00.0 (8086 0a54): Already using the nvme driver 00:34:00.672 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:34:00.672 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:34:03.235 Cleaning 00:34:03.235 Removing: /var/run/dpdk/spdk0/config 00:34:03.235 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:03.235 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:03.235 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:03.235 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:03.235 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:03.235 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:03.235 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:03.235 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:03.235 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:03.235 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:03.235 Removing: /var/run/dpdk/spdk1/config 00:34:03.235 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:03.235 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:03.235 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:03.235 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:03.235 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:03.235 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:03.235 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:03.235 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:03.235 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:03.235 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:03.235 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:03.235 Removing: /var/run/dpdk/spdk2/config 00:34:03.235 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:03.235 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:03.235 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:03.235 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:03.235 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:03.235 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:03.235 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:03.235 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:03.235 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:03.235 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:03.235 Removing: /var/run/dpdk/spdk3/config 00:34:03.235 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:03.235 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:03.235 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:03.235 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:03.235 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:03.235 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:03.235 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:03.235 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:03.235 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:03.235 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:03.235 Removing: /var/run/dpdk/spdk4/config 00:34:03.235 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:03.235 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:03.235 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:03.235 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:03.235 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:03.235 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:03.235 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:03.235 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:03.235 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:03.235 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:03.235 Removing: /dev/shm/bdev_svc_trace.1 00:34:03.235 Removing: /dev/shm/nvmf_trace.0 00:34:03.235 Removing: /dev/shm/spdk_tgt_trace.pid2837846 00:34:03.235 Removing: /var/run/dpdk/spdk0 00:34:03.235 Removing: /var/run/dpdk/spdk1 00:34:03.235 Removing: /var/run/dpdk/spdk2 00:34:03.235 Removing: /var/run/dpdk/spdk3 00:34:03.235 Removing: /var/run/dpdk/spdk4 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2835418 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2836650 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2837846 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2838542 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2839618 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2839887 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2840985 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2841008 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2841373 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2843334 00:34:03.235 Removing: 
/var/run/dpdk/spdk_pid2844507 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2844934 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2845388 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2845725 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2846041 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2846325 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2846608 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2846927 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2847784 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2851394 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2851692 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2851924 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2852067 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2852581 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2852822 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2853388 00:34:03.235 Removing: /var/run/dpdk/spdk_pid2853647 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2853964 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2854206 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2854496 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2854640 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2855214 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2855435 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2855768 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2856144 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2856340 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2856436 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2856791 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2857070 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2857356 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2857635 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2858048 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2858569 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2858885 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2859173 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2859460 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2859755 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2860041 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2860364 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2860654 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2860948 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2861235 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2861535 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2861841 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2862159 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2862452 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2862771 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2862855 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2863217 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2867181 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2915574 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2920369 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2931427 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2937025 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2941285 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2942006 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2948606 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2955621 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2955651 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2956514 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2957617 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2959044 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2959714 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2959840 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2960100 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2960160 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2960328 00:34:03.493 Removing: 
/var/run/dpdk/spdk_pid2961157 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2962201 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2963242 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2963772 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2963783 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2964109 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2965448 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2966562 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2975505 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2975960 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2980586 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2986973 00:34:03.493 Removing: /var/run/dpdk/spdk_pid2990164 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3001555 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3011494 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3013326 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3014372 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3032615 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3036551 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3073475 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3078557 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3080190 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3082256 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3082541 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3082862 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3083198 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3084059 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3086520 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3087904 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3088720 00:34:03.493 Removing: /var/run/dpdk/spdk_pid3091123 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3091942 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3092758 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3097085 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3107722 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3112165 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3118703 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3120171 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3121876 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3126498 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3130854 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3139095 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3139097 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3144041 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3144267 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3144521 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3144960 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3144983 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3149772 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3150310 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3154901 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3157791 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3163636 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3169584 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3179944 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3187850 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3187866 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3207884 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3208679 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3209352 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3210034 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3211123 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3211912 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3212708 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3213510 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3218060 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3218325 00:34:03.751 Removing: 
/var/run/dpdk/spdk_pid3224599 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3224773 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3227384 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3236024 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3236034 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3241357 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3243512 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3245580 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3246774 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3249016 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3250219 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3259419 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3259941 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3260495 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3262931 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3263464 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3263996 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3268034 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3268097 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3270397 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3271050 00:34:03.751 Removing: /var/run/dpdk/spdk_pid3271084 00:34:03.751 Clean 00:34:03.751 01:00:21 -- common/autotest_common.sh@1451 -- # return 0 00:34:03.751 01:00:21 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:34:03.751 01:00:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:03.751 01:00:21 -- common/autotest_common.sh@10 -- # set +x 00:34:04.008 01:00:21 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:34:04.008 01:00:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:04.008 01:00:21 -- common/autotest_common.sh@10 -- # set +x 00:34:04.008 01:00:21 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:04.008 01:00:21 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:04.008 01:00:21 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:04.008 01:00:21 -- spdk/autotest.sh@391 -- # hash lcov 00:34:04.008 01:00:21 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:04.008 01:00:21 -- spdk/autotest.sh@393 -- # hostname 00:34:04.008 01:00:21 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-16 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:04.264 geninfo: WARNING: invalid characters removed from testname! 
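The coverage wrap-up that begins here is plain lcov plumbing: the capture above writes cov_test.info from the instrumented build tree, and the passes that follow merge it with the pre-test baseline and strip DPDK, system and example/app sources. Condensed into a stand-alone sketch (the --rc option list is trimmed; paths and filter patterns are the ones used in this run):

# Capture, merge with the baseline, then filter the combined tracefile.
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
out=$src/../output
rc="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
lcov $rc --no-external -q -c -d "$src" -t "$(hostname)" -o "$out/cov_test.info"
lcov $rc -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $rc -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done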
00:34:36.359 01:00:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:43.003 01:00:59 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:48.278 01:01:05 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:51.562 01:01:08 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:54.093 01:01:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:57.376 01:01:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:03.949 01:01:20 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:03.949 01:01:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.949 01:01:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:03.949 01:01:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.949 01:01:20 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.949 01:01:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.949 01:01:20 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.949 01:01:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.949 01:01:20 -- paths/export.sh@5 -- $ export PATH 00:35:03.949 01:01:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.950 01:01:20 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:35:03.950 01:01:20 -- common/autobuild_common.sh@444 -- $ date +%s 00:35:03.950 01:01:20 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721084480.XXXXXX 00:35:03.950 01:01:20 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721084480.n0M3wA 00:35:03.950 01:01:20 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:35:03.950 01:01:20 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:35:03.950 01:01:20 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:35:03.950 01:01:20 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:35:03.950 01:01:20 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:35:03.950 01:01:20 -- common/autobuild_common.sh@460 -- $ get_config_params 00:35:03.950 01:01:20 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:35:03.950 01:01:20 -- common/autotest_common.sh@10 -- $ set +x 00:35:03.950 01:01:20 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:35:03.950 01:01:20 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:35:03.950 01:01:20 -- pm/common@17 -- $ local monitor 00:35:03.950 01:01:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:03.950 01:01:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:03.950 01:01:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:03.950 01:01:20 -- pm/common@21 -- $ date +%s 00:35:03.950 01:01:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:03.950 01:01:20 -- pm/common@21 -- $ date +%s 00:35:03.950 
01:01:20 -- pm/common@25 -- $ sleep 1 00:35:03.950 01:01:20 -- pm/common@21 -- $ date +%s 00:35:03.950 01:01:20 -- pm/common@21 -- $ date +%s 00:35:03.950 01:01:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721084480 00:35:03.950 01:01:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721084480 00:35:03.950 01:01:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721084480 00:35:03.950 01:01:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721084480 00:35:03.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721084480_collect-vmstat.pm.log 00:35:03.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721084480_collect-cpu-load.pm.log 00:35:03.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721084480_collect-cpu-temp.pm.log 00:35:03.950 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721084480_collect-bmc-pm.bmc.pm.log 00:35:04.208 01:01:21 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:35:04.208 01:01:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:35:04.208 01:01:21 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:04.208 01:01:21 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:35:04.208 01:01:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:35:04.208 01:01:21 -- spdk/autopackage.sh@19 -- $ timing_finish 00:35:04.208 01:01:21 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:04.208 01:01:21 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:35:04.208 01:01:21 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:04.208 01:01:22 -- spdk/autopackage.sh@20 -- $ exit 0 00:35:04.208 01:01:22 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:35:04.208 01:01:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:35:04.208 01:01:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:35:04.208 01:01:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:04.208 01:01:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:35:04.208 01:01:22 -- pm/common@44 -- $ pid=3282658 00:35:04.208 01:01:22 -- pm/common@50 -- $ kill -TERM 3282658 00:35:04.208 01:01:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:04.208 01:01:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:35:04.208 01:01:22 -- pm/common@44 -- $ pid=3282659 00:35:04.208 01:01:22 -- pm/common@50 -- $ kill 
-TERM 3282659 00:35:04.208 01:01:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:04.208 01:01:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:35:04.208 01:01:22 -- pm/common@44 -- $ pid=3282661 00:35:04.208 01:01:22 -- pm/common@50 -- $ kill -TERM 3282661 00:35:04.208 01:01:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:04.208 01:01:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:35:04.208 01:01:22 -- pm/common@44 -- $ pid=3282687 00:35:04.208 01:01:22 -- pm/common@50 -- $ sudo -E kill -TERM 3282687 00:35:04.466 + [[ -n 2721823 ]] 00:35:04.466 + sudo kill 2721823 00:35:04.475 [Pipeline] } 00:35:04.494 [Pipeline] // stage 00:35:04.499 [Pipeline] } 00:35:04.517 [Pipeline] // timeout 00:35:04.522 [Pipeline] } 00:35:04.539 [Pipeline] // catchError 00:35:04.544 [Pipeline] } 00:35:04.562 [Pipeline] // wrap 00:35:04.566 [Pipeline] } 00:35:04.581 [Pipeline] // catchError 00:35:04.590 [Pipeline] stage 00:35:04.592 [Pipeline] { (Epilogue) 00:35:04.606 [Pipeline] catchError 00:35:04.607 [Pipeline] { 00:35:04.620 [Pipeline] echo 00:35:04.622 Cleanup processes 00:35:04.629 [Pipeline] sh 00:35:04.909 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:04.910 3282770 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:35:04.910 3283106 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:04.923 [Pipeline] sh 00:35:05.204 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:05.204 ++ grep -v 'sudo pgrep' 00:35:05.204 ++ awk '{print $1}' 00:35:05.204 + sudo kill -9 3282770 00:35:05.215 [Pipeline] sh 00:35:05.499 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:20.392 [Pipeline] sh 00:35:20.675 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:20.675 Artifacts sizes are good 00:35:20.688 [Pipeline] archiveArtifacts 00:35:20.695 Archiving artifacts 00:35:20.903 [Pipeline] sh 00:35:21.218 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:21.231 [Pipeline] cleanWs 00:35:21.240 [WS-CLEANUP] Deleting project workspace... 00:35:21.240 [WS-CLEANUP] Deferred wipeout is used... 00:35:21.247 [WS-CLEANUP] done 00:35:21.248 [Pipeline] } 00:35:21.265 [Pipeline] // catchError 00:35:21.276 [Pipeline] sh 00:35:21.557 + logger -p user.info -t JENKINS-CI 00:35:21.564 [Pipeline] } 00:35:21.578 [Pipeline] // stage 00:35:21.582 [Pipeline] } 00:35:21.601 [Pipeline] // node 00:35:21.606 [Pipeline] End of Pipeline 00:35:21.649 Finished: SUCCESS